Test Report: QEMU_macOS 19374

1bf5b6cb3e281fd50d6a0e1f3835234e48601115:2024-08-05:35661

Failed tests (156/266)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 10.98
7 TestDownloadOnly/v1.20.0/kubectl 0
31 TestOffline 9.95
36 TestAddons/Setup 10.15
37 TestCertOptions 10.2
38 TestCertExpiration 195.45
39 TestDockerFlags 10.47
40 TestForceSystemdFlag 11.01
41 TestForceSystemdEnv 10.48
47 TestErrorSpam/setup 9.89
56 TestFunctional/serial/StartWithProxy 9.91
58 TestFunctional/serial/SoftStart 5.26
59 TestFunctional/serial/KubeContext 0.06
60 TestFunctional/serial/KubectlGetPods 0.06
67 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.05
68 TestFunctional/serial/CacheCmd/cache/cache_reload 0.15
70 TestFunctional/serial/MinikubeKubectlCmd 0.74
71 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.97
72 TestFunctional/serial/ExtraConfig 5.21
73 TestFunctional/serial/ComponentHealth 0.06
74 TestFunctional/serial/LogsCmd 0.08
75 TestFunctional/serial/LogsFileCmd 0.07
76 TestFunctional/serial/InvalidService 0.03
79 TestFunctional/parallel/DashboardCmd 0.2
82 TestFunctional/parallel/StatusCmd 0.17
86 TestFunctional/parallel/ServiceCmdConnect 0.14
88 TestFunctional/parallel/PersistentVolumeClaim 0.03
90 TestFunctional/parallel/SSHCmd 0.12
91 TestFunctional/parallel/CpCmd 0.29
93 TestFunctional/parallel/FileSync 0.07
94 TestFunctional/parallel/CertSync 0.29
98 TestFunctional/parallel/NodeLabels 0.06
100 TestFunctional/parallel/NonActiveRuntimeDisabled 0.04
104 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.07
107 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 0
108 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 77.3
109 TestFunctional/parallel/ServiceCmd/DeployApp 0.03
110 TestFunctional/parallel/ServiceCmd/List 0.04
111 TestFunctional/parallel/ServiceCmd/JSONOutput 0.04
112 TestFunctional/parallel/ServiceCmd/HTTPS 0.04
113 TestFunctional/parallel/ServiceCmd/Format 0.04
114 TestFunctional/parallel/ServiceCmd/URL 0.04
122 TestFunctional/parallel/Version/components 0.04
123 TestFunctional/parallel/ImageCommands/ImageListShort 0.03
124 TestFunctional/parallel/ImageCommands/ImageListTable 0.03
125 TestFunctional/parallel/ImageCommands/ImageListJson 0.04
126 TestFunctional/parallel/ImageCommands/ImageListYaml 0.03
127 TestFunctional/parallel/ImageCommands/ImageBuild 0.11
129 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.3
130 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.28
131 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.15
132 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.03
134 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.07
136 TestFunctional/parallel/DockerEnv/bash 0.05
137 TestFunctional/parallel/UpdateContextCmd/no_changes 0.04
138 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.04
139 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.04
140 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 15.06
142 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 39.68
150 TestMultiControlPlane/serial/StartCluster 9.84
151 TestMultiControlPlane/serial/DeployApp 81.99
152 TestMultiControlPlane/serial/PingHostFromPods 0.09
153 TestMultiControlPlane/serial/AddWorkerNode 0.08
154 TestMultiControlPlane/serial/NodeLabels 0.06
155 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.08
156 TestMultiControlPlane/serial/CopyFile 0.06
157 TestMultiControlPlane/serial/StopSecondaryNode 0.11
158 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.08
159 TestMultiControlPlane/serial/RestartSecondaryNode 49.16
160 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.08
161 TestMultiControlPlane/serial/RestartClusterKeepsNodes 9
162 TestMultiControlPlane/serial/DeleteSecondaryNode 0.1
163 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.08
164 TestMultiControlPlane/serial/StopCluster 1.94
165 TestMultiControlPlane/serial/RestartCluster 5.25
166 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.08
167 TestMultiControlPlane/serial/AddSecondaryNode 0.07
168 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.08
171 TestImageBuild/serial/Setup 9.9
174 TestJSONOutput/start/Command 9.84
180 TestJSONOutput/pause/Command 0.08
186 TestJSONOutput/unpause/Command 0.05
203 TestMinikubeProfile 10.2
206 TestMountStart/serial/StartWithMountFirst 9.93
209 TestMultiNode/serial/FreshStart2Nodes 9.93
210 TestMultiNode/serial/DeployApp2Nodes 101.81
211 TestMultiNode/serial/PingHostFrom2Pods 0.09
212 TestMultiNode/serial/AddNode 0.07
213 TestMultiNode/serial/MultiNodeLabels 0.06
214 TestMultiNode/serial/ProfileList 0.08
215 TestMultiNode/serial/CopyFile 0.06
216 TestMultiNode/serial/StopNode 0.14
217 TestMultiNode/serial/StartAfterStop 47.4
218 TestMultiNode/serial/RestartKeepsNodes 8.54
219 TestMultiNode/serial/DeleteNode 0.1
220 TestMultiNode/serial/StopMultiNode 2.18
221 TestMultiNode/serial/RestartMultiNode 5.25
222 TestMultiNode/serial/ValidateNameConflict 20.33
226 TestPreload 9.93
228 TestScheduledStopUnix 10
229 TestSkaffold 12.84
232 TestRunningBinaryUpgrade 626.41
234 TestKubernetesUpgrade 17.5
248 TestStoppedBinaryUpgrade/Upgrade 589.63
258 TestPause/serial/Start 9.92
261 TestNoKubernetes/serial/StartWithK8s 9.89
262 TestNoKubernetes/serial/StartWithStopK8s 7.42
263 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.66
264 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.51
265 TestNoKubernetes/serial/Start 5.27
269 TestNoKubernetes/serial/StartNoArgs 5.38
271 TestNetworkPlugins/group/auto/Start 9.91
272 TestNetworkPlugins/group/kindnet/Start 9.8
273 TestNetworkPlugins/group/calico/Start 9.84
274 TestNetworkPlugins/group/custom-flannel/Start 9.83
275 TestNetworkPlugins/group/false/Start 9.96
276 TestNetworkPlugins/group/enable-default-cni/Start 9.92
277 TestNetworkPlugins/group/flannel/Start 9.82
278 TestNetworkPlugins/group/bridge/Start 9.77
279 TestNetworkPlugins/group/kubenet/Start 10.02
281 TestStartStop/group/old-k8s-version/serial/FirstStart 9.98
282 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
283 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.12
286 TestStartStop/group/old-k8s-version/serial/SecondStart 5.26
287 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
288 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
289 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
290 TestStartStop/group/old-k8s-version/serial/Pause 0.1
292 TestStartStop/group/no-preload/serial/FirstStart 10.02
293 TestStartStop/group/no-preload/serial/DeployApp 0.09
294 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.11
297 TestStartStop/group/no-preload/serial/SecondStart 5.26
298 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
299 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
300 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
301 TestStartStop/group/no-preload/serial/Pause 0.1
303 TestStartStop/group/embed-certs/serial/FirstStart 9.92
304 TestStartStop/group/embed-certs/serial/DeployApp 0.09
305 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.11
308 TestStartStop/group/embed-certs/serial/SecondStart 5.25
309 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
310 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
311 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
312 TestStartStop/group/embed-certs/serial/Pause 0.1
314 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 10.06
316 TestStartStop/group/newest-cni/serial/FirstStart 9.96
317 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
318 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.11
321 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 6.28
326 TestStartStop/group/newest-cni/serial/SecondStart 5.26
327 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
328 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
329 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
330 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
333 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
334 TestStartStop/group/newest-cni/serial/Pause 0.1
TestDownloadOnly/v1.20.0/json-events (10.98s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-834000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-834000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (10.983152s)

-- stdout --
	{"specversion":"1.0","id":"cbd2b8fe-5b19-4422-8ecd-078afae3dd4e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-834000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c63506c7-d759-43c0-ba41-be0069fb9a95","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19374"}}
	{"specversion":"1.0","id":"073663e3-0d78-4647-9cce-d85b8f27018d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig"}}
	{"specversion":"1.0","id":"2802343e-c13f-4d22-835b-1b317b734435","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"6c29eb96-9321-488d-bffd-9b6d364a358c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"4ef4d722-1ab2-491a-9830-6c2de17ad8e2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube"}}
	{"specversion":"1.0","id":"c3c1d89b-6bcf-4e56-8f5a-25579b6c37d8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"e635cdc0-6393-4c30-bd71-0cbb1ed160aa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"eb88ffed-cad8-4c83-a6a9-01825c0531e7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"12cc5d5b-e31c-423b-a934-b4b09907b0e4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"bd5c1b3a-1121-4c9e-9ffb-a9dfff9caea6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-834000\" primary control-plane node in \"download-only-834000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"12f2971f-5fcd-496e-8ff1-4f95cda93f39","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"414fc1fe-5234-4258-a7b4-c48367165d12","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19374-6507/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1047b9aa0 0x1047b9aa0 0x1047b9aa0 0x1047b9aa0 0x1047b9aa0 0x1047b9aa0 0x1047b9aa0] Decompressors:map[bz2:0x1400048ff90 gz:0x1400048ff98 tar:0x1400048ff40 tar.bz2:0x1400048ff50 tar.gz:0x1400048ff60 tar.xz:0x1400048ff70 tar.zst:0x1400048ff80 tbz2:0x1400048ff50 tgz:0x14
00048ff60 txz:0x1400048ff70 tzst:0x1400048ff80 xz:0x1400048ffa0 zip:0x1400048ffb0 zst:0x1400048ffa8] Getters:map[file:0x14000063830 http:0x14000844730 https:0x14000844780] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"531b6e34-662a-4d66-9981-cf71f41a0f74","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0805 10:25:29.360394    7009 out.go:291] Setting OutFile to fd 1 ...
	I0805 10:25:29.360546    7009 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:25:29.360549    7009 out.go:304] Setting ErrFile to fd 2...
	I0805 10:25:29.360551    7009 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:25:29.360684    7009 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
	W0805 10:25:29.360768    7009 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19374-6507/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19374-6507/.minikube/config/config.json: no such file or directory
	I0805 10:25:29.362145    7009 out.go:298] Setting JSON to true
	I0805 10:25:29.380060    7009 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5099,"bootTime":1722873630,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 10:25:29.380150    7009 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 10:25:29.384923    7009 out.go:97] [download-only-834000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 10:25:29.385051    7009 notify.go:220] Checking for updates...
	W0805 10:25:29.385083    7009 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/preloaded-tarball: no such file or directory
	I0805 10:25:29.387624    7009 out.go:169] MINIKUBE_LOCATION=19374
	I0805 10:25:29.391277    7009 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	I0805 10:25:29.395632    7009 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 10:25:29.398666    7009 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 10:25:29.401710    7009 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	W0805 10:25:29.407684    7009 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0805 10:25:29.407900    7009 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 10:25:29.411656    7009 out.go:97] Using the qemu2 driver based on user configuration
	I0805 10:25:29.411675    7009 start.go:297] selected driver: qemu2
	I0805 10:25:29.411688    7009 start.go:901] validating driver "qemu2" against <nil>
	I0805 10:25:29.411751    7009 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 10:25:29.414651    7009 out.go:169] Automatically selected the socket_vmnet network
	I0805 10:25:29.419991    7009 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0805 10:25:29.420101    7009 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0805 10:25:29.420160    7009 cni.go:84] Creating CNI manager for ""
	I0805 10:25:29.420179    7009 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0805 10:25:29.420235    7009 start.go:340] cluster config:
	{Name:download-only-834000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-834000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 10:25:29.424166    7009 iso.go:125] acquiring lock: {Name:mk3c732f608e41abb95f9d7b90e7a96dff21b06c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 10:25:29.428732    7009 out.go:97] Downloading VM boot image ...
	I0805 10:25:29.428753    7009 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso
	I0805 10:25:33.912534    7009 out.go:97] Starting "download-only-834000" primary control-plane node in "download-only-834000" cluster
	I0805 10:25:33.912563    7009 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0805 10:25:33.969214    7009 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0805 10:25:33.969222    7009 cache.go:56] Caching tarball of preloaded images
	I0805 10:25:33.969371    7009 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0805 10:25:33.973644    7009 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0805 10:25:33.973651    7009 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0805 10:25:34.051778    7009 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0805 10:25:39.225091    7009 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0805 10:25:39.225222    7009 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0805 10:25:39.927063    7009 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0805 10:25:39.927275    7009 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/download-only-834000/config.json ...
	I0805 10:25:39.927292    7009 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/download-only-834000/config.json: {Name:mk34c7f5922259b3af4097cf016aa54c3298cc91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 10:25:39.927962    7009 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0805 10:25:39.928252    7009 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0805 10:25:40.266233    7009 out.go:169] 
	W0805 10:25:40.272309    7009 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19374-6507/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1047b9aa0 0x1047b9aa0 0x1047b9aa0 0x1047b9aa0 0x1047b9aa0 0x1047b9aa0 0x1047b9aa0] Decompressors:map[bz2:0x1400048ff90 gz:0x1400048ff98 tar:0x1400048ff40 tar.bz2:0x1400048ff50 tar.gz:0x1400048ff60 tar.xz:0x1400048ff70 tar.zst:0x1400048ff80 tbz2:0x1400048ff50 tgz:0x1400048ff60 txz:0x1400048ff70 tzst:0x1400048ff80 xz:0x1400048ffa0 zip:0x1400048ffb0 zst:0x1400048ffa8] Getters:map[file:0x14000063830 http:0x14000844730 https:0x14000844780] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0805 10:25:40.272345    7009 out_reason.go:110] 
	W0805 10:25:40.280264    7009 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 10:25:40.284255    7009 out.go:169] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-834000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (10.98s)
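The root cause is not flakiness: dl.k8s.io returns 404 for the v1.20.0 darwin/arm64 kubectl checksum, and a kubectl binary for darwin/arm64 appears never to have been published at that version, so minikube's downloader fails with exit status 40. A minimal standalone Go sketch (hypothetical, not part of the test suite; the URL is copied verbatim from the error above) to confirm the 404 independently of minikube:

```go
// probe404.go - hypothetical standalone sketch; checks the checksum URL
// from the INET_CACHE_KUBECTL error above without going through minikube.
package main

import (
	"fmt"
	"net/http"
)

func main() {
	// URL copied verbatim from the failure message above.
	const url = "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"

	resp, err := http.Head(url) // follows any CDN redirect automatically
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	resp.Body.Close()

	// "404 Not Found" here matches "bad response code: 404" in the log:
	// the checksum (and binary) does not exist for this platform/version.
	fmt.Println(url, "->", resp.Status)
}
```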

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19374-6507/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)
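This failure is purely downstream of the json-events failure above: the download-only run exited with status 40 before caching kubectl, so the file the test stats was never written. A minimal sketch of that kind of existence check (hypothetical, not the actual code at aaa_download_only_test.go:175; the path is copied from the failure message):

```go
// statcheck.go - hypothetical sketch of the cached-binary assertion.
package main

import (
	"fmt"
	"os"
)

func main() {
	// Path copied from the failure message above.
	const path = "/Users/jenkins/minikube-integration/19374-6507/.minikube/cache/darwin/arm64/v1.20.0/kubectl"

	if _, err := os.Stat(path); err != nil {
		// On this run: "no such file or directory", because the earlier
		// download step failed before writing the cache entry.
		fmt.Println("expected cached kubectl:", err)
		return
	}
	fmt.Println("cached kubectl present at", path)
}
```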

TestOffline (9.95s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-828000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-828000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.80142575s)

-- stdout --
	* [offline-docker-828000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19374
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-828000" primary control-plane node in "offline-docker-828000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-828000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0805 10:36:40.955897    8834 out.go:291] Setting OutFile to fd 1 ...
	I0805 10:36:40.956066    8834 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:36:40.956069    8834 out.go:304] Setting ErrFile to fd 2...
	I0805 10:36:40.956072    8834 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:36:40.956229    8834 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
	I0805 10:36:40.957516    8834 out.go:298] Setting JSON to false
	I0805 10:36:40.975237    8834 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5770,"bootTime":1722873630,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 10:36:40.975314    8834 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 10:36:40.979619    8834 out.go:177] * [offline-docker-828000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 10:36:40.983614    8834 out.go:177]   - MINIKUBE_LOCATION=19374
	I0805 10:36:40.983626    8834 notify.go:220] Checking for updates...
	I0805 10:36:40.992576    8834 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	I0805 10:36:40.995583    8834 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 10:36:40.998679    8834 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 10:36:41.001629    8834 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	I0805 10:36:41.004595    8834 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 10:36:41.008215    8834 config.go:182] Loaded profile config "multinode-022000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 10:36:41.008268    8834 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 10:36:41.012528    8834 out.go:177] * Using the qemu2 driver based on user configuration
	I0805 10:36:41.019587    8834 start.go:297] selected driver: qemu2
	I0805 10:36:41.019599    8834 start.go:901] validating driver "qemu2" against <nil>
	I0805 10:36:41.019607    8834 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 10:36:41.021533    8834 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 10:36:41.024580    8834 out.go:177] * Automatically selected the socket_vmnet network
	I0805 10:36:41.027709    8834 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 10:36:41.027725    8834 cni.go:84] Creating CNI manager for ""
	I0805 10:36:41.027731    8834 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 10:36:41.027734    8834 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0805 10:36:41.027764    8834 start.go:340] cluster config:
	{Name:offline-docker-828000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-828000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 10:36:41.031269    8834 iso.go:125] acquiring lock: {Name:mk3c732f608e41abb95f9d7b90e7a96dff21b06c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 10:36:41.038550    8834 out.go:177] * Starting "offline-docker-828000" primary control-plane node in "offline-docker-828000" cluster
	I0805 10:36:41.042610    8834 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 10:36:41.042650    8834 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0805 10:36:41.042661    8834 cache.go:56] Caching tarball of preloaded images
	I0805 10:36:41.042730    8834 preload.go:172] Found /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 10:36:41.042735    8834 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 10:36:41.042799    8834 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/offline-docker-828000/config.json ...
	I0805 10:36:41.042811    8834 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/offline-docker-828000/config.json: {Name:mkce0f84cea8090be68d51ed4e0fe020bc50f61c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 10:36:41.043129    8834 start.go:360] acquireMachinesLock for offline-docker-828000: {Name:mkb00f076c5be0afa0d8c4c2e732c0f60f89f86b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 10:36:41.043163    8834 start.go:364] duration metric: took 24.75µs to acquireMachinesLock for "offline-docker-828000"
	I0805 10:36:41.043174    8834 start.go:93] Provisioning new machine with config: &{Name:offline-docker-828000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-828000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 10:36:41.043203    8834 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 10:36:41.051598    8834 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0805 10:36:41.067822    8834 start.go:159] libmachine.API.Create for "offline-docker-828000" (driver="qemu2")
	I0805 10:36:41.067858    8834 client.go:168] LocalClient.Create starting
	I0805 10:36:41.067935    8834 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/ca.pem
	I0805 10:36:41.067967    8834 main.go:141] libmachine: Decoding PEM data...
	I0805 10:36:41.067978    8834 main.go:141] libmachine: Parsing certificate...
	I0805 10:36:41.068018    8834 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/cert.pem
	I0805 10:36:41.068043    8834 main.go:141] libmachine: Decoding PEM data...
	I0805 10:36:41.068050    8834 main.go:141] libmachine: Parsing certificate...
	I0805 10:36:41.068437    8834 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19374-6507/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 10:36:41.216560    8834 main.go:141] libmachine: Creating SSH key...
	I0805 10:36:41.315955    8834 main.go:141] libmachine: Creating Disk image...
	I0805 10:36:41.315970    8834 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 10:36:41.316210    8834 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/offline-docker-828000/disk.qcow2.raw /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/offline-docker-828000/disk.qcow2
	I0805 10:36:41.325823    8834 main.go:141] libmachine: STDOUT: 
	I0805 10:36:41.325846    8834 main.go:141] libmachine: STDERR: 
	I0805 10:36:41.325904    8834 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/offline-docker-828000/disk.qcow2 +20000M
	I0805 10:36:41.344060    8834 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 10:36:41.344084    8834 main.go:141] libmachine: STDERR: 
	I0805 10:36:41.344105    8834 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/offline-docker-828000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/offline-docker-828000/disk.qcow2
	I0805 10:36:41.344111    8834 main.go:141] libmachine: Starting QEMU VM...
	I0805 10:36:41.344119    8834 qemu.go:418] Using hvf for hardware acceleration
	I0805 10:36:41.344151    8834 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/offline-docker-828000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/offline-docker-828000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/offline-docker-828000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:21:b5:74:c2:db -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/offline-docker-828000/disk.qcow2
	I0805 10:36:41.345990    8834 main.go:141] libmachine: STDOUT: 
	I0805 10:36:41.346007    8834 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 10:36:41.346024    8834 client.go:171] duration metric: took 278.166542ms to LocalClient.Create
	I0805 10:36:43.348062    8834 start.go:128] duration metric: took 2.304883541s to createHost
	I0805 10:36:43.348078    8834 start.go:83] releasing machines lock for "offline-docker-828000", held for 2.304941667s
	W0805 10:36:43.348106    8834 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 10:36:43.364175    8834 out.go:177] * Deleting "offline-docker-828000" in qemu2 ...
	W0805 10:36:43.373574    8834 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 10:36:43.373584    8834 start.go:729] Will try again in 5 seconds ...
	I0805 10:36:48.375747    8834 start.go:360] acquireMachinesLock for offline-docker-828000: {Name:mkb00f076c5be0afa0d8c4c2e732c0f60f89f86b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 10:36:48.376286    8834 start.go:364] duration metric: took 425.5µs to acquireMachinesLock for "offline-docker-828000"
	I0805 10:36:48.376452    8834 start.go:93] Provisioning new machine with config: &{Name:offline-docker-828000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-828000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 10:36:48.376664    8834 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 10:36:48.393943    8834 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0805 10:36:48.443510    8834 start.go:159] libmachine.API.Create for "offline-docker-828000" (driver="qemu2")
	I0805 10:36:48.443560    8834 client.go:168] LocalClient.Create starting
	I0805 10:36:48.443671    8834 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/ca.pem
	I0805 10:36:48.443741    8834 main.go:141] libmachine: Decoding PEM data...
	I0805 10:36:48.443760    8834 main.go:141] libmachine: Parsing certificate...
	I0805 10:36:48.443833    8834 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/cert.pem
	I0805 10:36:48.443876    8834 main.go:141] libmachine: Decoding PEM data...
	I0805 10:36:48.443896    8834 main.go:141] libmachine: Parsing certificate...
	I0805 10:36:48.444456    8834 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19374-6507/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 10:36:48.603357    8834 main.go:141] libmachine: Creating SSH key...
	I0805 10:36:48.665893    8834 main.go:141] libmachine: Creating Disk image...
	I0805 10:36:48.665898    8834 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 10:36:48.666092    8834 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/offline-docker-828000/disk.qcow2.raw /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/offline-docker-828000/disk.qcow2
	I0805 10:36:48.675199    8834 main.go:141] libmachine: STDOUT: 
	I0805 10:36:48.675218    8834 main.go:141] libmachine: STDERR: 
	I0805 10:36:48.675268    8834 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/offline-docker-828000/disk.qcow2 +20000M
	I0805 10:36:48.683007    8834 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 10:36:48.683020    8834 main.go:141] libmachine: STDERR: 
	I0805 10:36:48.683031    8834 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/offline-docker-828000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/offline-docker-828000/disk.qcow2
	I0805 10:36:48.683035    8834 main.go:141] libmachine: Starting QEMU VM...
	I0805 10:36:48.683048    8834 qemu.go:418] Using hvf for hardware acceleration
	I0805 10:36:48.683071    8834 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/offline-docker-828000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/offline-docker-828000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/offline-docker-828000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:ba:59:61:7b:d0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/offline-docker-828000/disk.qcow2
	I0805 10:36:48.684609    8834 main.go:141] libmachine: STDOUT: 
	I0805 10:36:48.684625    8834 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 10:36:48.684636    8834 client.go:171] duration metric: took 241.072792ms to LocalClient.Create
	I0805 10:36:50.686788    8834 start.go:128] duration metric: took 2.310125708s to createHost
	I0805 10:36:50.686836    8834 start.go:83] releasing machines lock for "offline-docker-828000", held for 2.310557459s
	W0805 10:36:50.687223    8834 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-828000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-828000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 10:36:50.695768    8834 out.go:177] 
	W0805 10:36:50.701923    8834 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 10:36:50.701946    8834 out.go:239] * 
	* 
	W0805 10:36:50.704556    8834 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 10:36:50.712806    8834 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-828000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-08-05 10:36:50.729248 -0700 PDT m=+681.438173918
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-828000 -n offline-docker-828000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-828000 -n offline-docker-828000: exit status 7 (64.085041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-828000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-828000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-828000
--- FAIL: TestOffline (9.95s)
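Both VM create attempts die the same way: QEMU is never launched because socket_vmnet_client cannot connect to /var/run/socket_vmnet ("Connection refused"), which indicates the socket_vmnet daemon is not running (or not listening) on this agent; the same signature recurs across most failures in this report. A minimal standalone Go sketch (hypothetical, not part of the test suite; the socket path is copied from the error above) that reproduces the connection check without QEMU:

```go
// socketprobe.go - hypothetical standalone sketch; dials the unix socket
// that minikube's qemu2 driver hands to socket_vmnet_client.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Socket path copied verbatim from the GUEST_PROVISION error above.
	const sock = "/var/run/socket_vmnet"

	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// Matches the log: Failed to connect to "/var/run/socket_vmnet":
		// Connection refused (nothing is accepting on the socket).
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections at", sock)
}
```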

TestAddons/Setup (10.15s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-690000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p addons-690000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns: exit status 80 (10.143014583s)

-- stdout --
	* [addons-690000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19374
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "addons-690000" primary control-plane node in "addons-690000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "addons-690000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0805 10:26:11.156767    7126 out.go:291] Setting OutFile to fd 1 ...
	I0805 10:26:11.156926    7126 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:26:11.156929    7126 out.go:304] Setting ErrFile to fd 2...
	I0805 10:26:11.156931    7126 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:26:11.157066    7126 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
	I0805 10:26:11.158127    7126 out.go:298] Setting JSON to false
	I0805 10:26:11.174081    7126 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5141,"bootTime":1722873630,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 10:26:11.174152    7126 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 10:26:11.177384    7126 out.go:177] * [addons-690000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 10:26:11.184422    7126 out.go:177]   - MINIKUBE_LOCATION=19374
	I0805 10:26:11.184504    7126 notify.go:220] Checking for updates...
	I0805 10:26:11.191412    7126 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	I0805 10:26:11.194387    7126 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 10:26:11.197406    7126 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 10:26:11.200357    7126 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	I0805 10:26:11.203394    7126 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 10:26:11.206542    7126 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 10:26:11.210362    7126 out.go:177] * Using the qemu2 driver based on user configuration
	I0805 10:26:11.216296    7126 start.go:297] selected driver: qemu2
	I0805 10:26:11.216302    7126 start.go:901] validating driver "qemu2" against <nil>
	I0805 10:26:11.216307    7126 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 10:26:11.218717    7126 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 10:26:11.221370    7126 out.go:177] * Automatically selected the socket_vmnet network
	I0805 10:26:11.224463    7126 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 10:26:11.224485    7126 cni.go:84] Creating CNI manager for ""
	I0805 10:26:11.224494    7126 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 10:26:11.224499    7126 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0805 10:26:11.224523    7126 start.go:340] cluster config:
	{Name:addons-690000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-690000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 10:26:11.228452    7126 iso.go:125] acquiring lock: {Name:mk3c732f608e41abb95f9d7b90e7a96dff21b06c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 10:26:11.237368    7126 out.go:177] * Starting "addons-690000" primary control-plane node in "addons-690000" cluster
	I0805 10:26:11.241389    7126 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 10:26:11.241407    7126 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0805 10:26:11.241416    7126 cache.go:56] Caching tarball of preloaded images
	I0805 10:26:11.241465    7126 preload.go:172] Found /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 10:26:11.241471    7126 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 10:26:11.241652    7126 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/addons-690000/config.json ...
	I0805 10:26:11.241662    7126 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/addons-690000/config.json: {Name:mk58a25d79a2091fdbb5c7909457ace8d0fa2149 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 10:26:11.242080    7126 start.go:360] acquireMachinesLock for addons-690000: {Name:mkb00f076c5be0afa0d8c4c2e732c0f60f89f86b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 10:26:11.242148    7126 start.go:364] duration metric: took 61.416µs to acquireMachinesLock for "addons-690000"
	I0805 10:26:11.242164    7126 start.go:93] Provisioning new machine with config: &{Name:addons-690000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-690000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 10:26:11.242199    7126 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 10:26:11.246437    7126 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0805 10:26:11.265746    7126 start.go:159] libmachine.API.Create for "addons-690000" (driver="qemu2")
	I0805 10:26:11.265787    7126 client.go:168] LocalClient.Create starting
	I0805 10:26:11.265927    7126 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/ca.pem
	I0805 10:26:11.331086    7126 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/cert.pem
	I0805 10:26:11.459357    7126 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19374-6507/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 10:26:11.700112    7126 main.go:141] libmachine: Creating SSH key...
	I0805 10:26:11.825274    7126 main.go:141] libmachine: Creating Disk image...
	I0805 10:26:11.825290    7126 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 10:26:11.825494    7126 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/addons-690000/disk.qcow2.raw /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/addons-690000/disk.qcow2
	I0805 10:26:11.835214    7126 main.go:141] libmachine: STDOUT: 
	I0805 10:26:11.835239    7126 main.go:141] libmachine: STDERR: 
	I0805 10:26:11.835298    7126 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/addons-690000/disk.qcow2 +20000M
	I0805 10:26:11.843087    7126 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 10:26:11.843100    7126 main.go:141] libmachine: STDERR: 
	I0805 10:26:11.843117    7126 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/addons-690000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/addons-690000/disk.qcow2
	I0805 10:26:11.843122    7126 main.go:141] libmachine: Starting QEMU VM...
	I0805 10:26:11.843151    7126 qemu.go:418] Using hvf for hardware acceleration
	I0805 10:26:11.843183    7126 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/addons-690000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/addons-690000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/addons-690000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:1b:66:0c:66:27 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/addons-690000/disk.qcow2
	I0805 10:26:11.844783    7126 main.go:141] libmachine: STDOUT: 
	I0805 10:26:11.844795    7126 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 10:26:11.844813    7126 client.go:171] duration metric: took 579.031375ms to LocalClient.Create
	I0805 10:26:13.846986    7126 start.go:128] duration metric: took 2.604803167s to createHost
	I0805 10:26:13.847072    7126 start.go:83] releasing machines lock for "addons-690000", held for 2.604960625s
	W0805 10:26:13.847173    7126 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 10:26:13.862236    7126 out.go:177] * Deleting "addons-690000" in qemu2 ...
	W0805 10:26:13.889309    7126 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 10:26:13.889343    7126 start.go:729] Will try again in 5 seconds ...
	I0805 10:26:18.891537    7126 start.go:360] acquireMachinesLock for addons-690000: {Name:mkb00f076c5be0afa0d8c4c2e732c0f60f89f86b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 10:26:18.892016    7126 start.go:364] duration metric: took 351.25µs to acquireMachinesLock for "addons-690000"
	I0805 10:26:18.892149    7126 start.go:93] Provisioning new machine with config: &{Name:addons-690000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-690000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 10:26:18.892440    7126 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 10:26:18.904128    7126 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0805 10:26:18.954015    7126 start.go:159] libmachine.API.Create for "addons-690000" (driver="qemu2")
	I0805 10:26:18.954070    7126 client.go:168] LocalClient.Create starting
	I0805 10:26:18.954203    7126 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/ca.pem
	I0805 10:26:18.954261    7126 main.go:141] libmachine: Decoding PEM data...
	I0805 10:26:18.954283    7126 main.go:141] libmachine: Parsing certificate...
	I0805 10:26:18.954374    7126 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/cert.pem
	I0805 10:26:18.954422    7126 main.go:141] libmachine: Decoding PEM data...
	I0805 10:26:18.954437    7126 main.go:141] libmachine: Parsing certificate...
	I0805 10:26:18.955137    7126 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19374-6507/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 10:26:19.114334    7126 main.go:141] libmachine: Creating SSH key...
	I0805 10:26:19.209088    7126 main.go:141] libmachine: Creating Disk image...
	I0805 10:26:19.209093    7126 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 10:26:19.209288    7126 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/addons-690000/disk.qcow2.raw /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/addons-690000/disk.qcow2
	I0805 10:26:19.218672    7126 main.go:141] libmachine: STDOUT: 
	I0805 10:26:19.218694    7126 main.go:141] libmachine: STDERR: 
	I0805 10:26:19.218746    7126 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/addons-690000/disk.qcow2 +20000M
	I0805 10:26:19.226753    7126 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 10:26:19.226792    7126 main.go:141] libmachine: STDERR: 
	I0805 10:26:19.226806    7126 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/addons-690000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/addons-690000/disk.qcow2
	I0805 10:26:19.226810    7126 main.go:141] libmachine: Starting QEMU VM...
	I0805 10:26:19.226819    7126 qemu.go:418] Using hvf for hardware acceleration
	I0805 10:26:19.226847    7126 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/addons-690000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/addons-690000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/addons-690000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:44:9b:2b:c7:53 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/addons-690000/disk.qcow2
	I0805 10:26:19.228518    7126 main.go:141] libmachine: STDOUT: 
	I0805 10:26:19.228531    7126 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 10:26:19.228545    7126 client.go:171] duration metric: took 274.474667ms to LocalClient.Create
	I0805 10:26:21.230684    7126 start.go:128] duration metric: took 2.338254208s to createHost
	I0805 10:26:21.230733    7126 start.go:83] releasing machines lock for "addons-690000", held for 2.338736333s
	W0805 10:26:21.231162    7126 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p addons-690000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p addons-690000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 10:26:21.242600    7126 out.go:177] 
	W0805 10:26:21.246635    7126 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 10:26:21.246656    7126 out.go:239] * 
	* 
	W0805 10:26:21.248642    7126 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 10:26:21.257642    7126 out.go:177] 
** /stderr **
addons_test.go:112: out/minikube-darwin-arm64 start -p addons-690000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns failed: exit status 80
--- FAIL: TestAddons/Setup (10.15s)
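The stderr trace above shows minikube's provisioning retry path: StartHost fails (start.go:714), the half-created profile is deleted, and after a fixed five-second pause (start.go:729) host creation is attempted once more before the run exits with GUEST_PROVISION. A rough sketch of that control flow, with hypothetical names standing in for minikube's internals:

package main

import (
	"errors"
	"fmt"
	"time"
)

// createHost is a stand-in for minikube's host provisioning; here it always
// fails the way this run did, so both attempts are exercised.
func createHost(profile string) error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	const profile = "addons-690000"
	if err := createHost(profile); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
		if err := createHost(profile); err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		}
	}
}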

TestCertOptions (10.2s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-759000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-759000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.945936333s)
-- stdout --
	* [cert-options-759000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19374
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-759000" primary control-plane node in "cert-options-759000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-759000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-759000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-759000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-759000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-759000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (78.976875ms)
-- stdout --
	* The control-plane node cert-options-759000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-759000"
-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-759000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-759000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-759000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-759000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (40.231167ms)
-- stdout --
	* The control-plane node cert-options-759000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-759000"
-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-759000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right api port. 
-- stdout --
	* The control-plane node cert-options-759000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-759000"
-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-08-05 10:48:22.759251 -0700 PDT m=+1373.477409834
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-759000 -n cert-options-759000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-759000 -n cert-options-759000: exit status 7 (30.180583ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-759000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-759000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-759000
--- FAIL: TestCertOptions (10.20s)
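The assertions at cert_options_test.go:69 expect the extra --apiserver-ips and --apiserver-names values to appear in the apiserver certificate's subject alternative names; they fail here only because no VM ever booted to mint the certificate. For reference, the same SAN inspection can be done offline with the Go standard library; a sketch, assuming the certificate PEM has been copied to a local file (the program and its argument are hypothetical):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"net"
	"os"
)

func main() {
	// Usage: san-check apiserver.crt
	data, err := os.ReadFile(os.Args[1])
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	fmt.Println("DNS SANs:", cert.DNSNames) // test expects localhost, www.google.com
	want := net.ParseIP("192.168.15.15")
	for _, ip := range cert.IPAddresses { // test expects 127.0.0.1, 192.168.15.15
		if ip.Equal(want) {
			fmt.Println("found", want, "in IP SANs")
		}
	}
}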

TestCertExpiration (195.45s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-440000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-440000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (10.06544025s)
-- stdout --
	* [cert-expiration-440000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19374
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-440000" primary control-plane node in "cert-expiration-440000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-440000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-440000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-440000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-440000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-440000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.233790333s)
-- stdout --
	* [cert-expiration-440000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19374
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-440000" primary control-plane node in "cert-expiration-440000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-440000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-440000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-440000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-440000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-440000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19374
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-440000" primary control-plane node in "cert-expiration-440000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-440000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-440000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-440000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-08-05 10:51:12.356037 -0700 PDT m=+1543.076459084
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-440000 -n cert-expiration-440000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-440000 -n cert-expiration-440000: exit status 7 (66.439584ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-440000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-440000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-440000
--- FAIL: TestCertExpiration (195.45s)
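The 195.45 s wall clock is consistent with the test's design rather than with slow starts: a roughly 10 s failed start with --cert-expiration=3m, a fixed three-minute wait for those certificates to lapse, then a roughly 5 s failed restart with --cert-expiration=8760h that is expected to warn about the expired certs. The expiry condition itself is just a NotAfter comparison; a minimal illustration (a sketch, not minikube's implementation):

package main

import (
	"crypto/x509"
	"fmt"
	"time"
)

// expired reports whether a certificate is past its NotAfter date, the
// condition the test expects minikube to warn about on restart.
func expired(cert *x509.Certificate, now time.Time) bool {
	return now.After(cert.NotAfter)
}

func main() {
	// With --cert-expiration=3m, a certificate minted at start is stale
	// three minutes later.
	cert := &x509.Certificate{NotAfter: time.Now().Add(3 * time.Minute)}
	fmt.Println("expired immediately?", expired(cert, time.Now()))
	fmt.Println("expired after 4m?   ", expired(cert, time.Now().Add(4*time.Minute)))
}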

TestDockerFlags (10.47s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-804000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-804000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.237987166s)
-- stdout --
	* [docker-flags-804000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19374
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-804000" primary control-plane node in "docker-flags-804000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-804000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0805 10:48:02.220018    9606 out.go:291] Setting OutFile to fd 1 ...
	I0805 10:48:02.220184    9606 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:48:02.220187    9606 out.go:304] Setting ErrFile to fd 2...
	I0805 10:48:02.220190    9606 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:48:02.220331    9606 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
	I0805 10:48:02.221406    9606 out.go:298] Setting JSON to false
	I0805 10:48:02.237435    9606 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6452,"bootTime":1722873630,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 10:48:02.237503    9606 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 10:48:02.244078    9606 out.go:177] * [docker-flags-804000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 10:48:02.250918    9606 out.go:177]   - MINIKUBE_LOCATION=19374
	I0805 10:48:02.250991    9606 notify.go:220] Checking for updates...
	I0805 10:48:02.257400    9606 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	I0805 10:48:02.260921    9606 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 10:48:02.263954    9606 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 10:48:02.266964    9606 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	I0805 10:48:02.269923    9606 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 10:48:02.273303    9606 config.go:182] Loaded profile config "cert-expiration-440000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 10:48:02.273381    9606 config.go:182] Loaded profile config "multinode-022000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 10:48:02.273430    9606 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 10:48:02.277978    9606 out.go:177] * Using the qemu2 driver based on user configuration
	I0805 10:48:02.284868    9606 start.go:297] selected driver: qemu2
	I0805 10:48:02.284875    9606 start.go:901] validating driver "qemu2" against <nil>
	I0805 10:48:02.284881    9606 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 10:48:02.287224    9606 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 10:48:02.289912    9606 out.go:177] * Automatically selected the socket_vmnet network
	I0805 10:48:02.293050    9606 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0805 10:48:02.293104    9606 cni.go:84] Creating CNI manager for ""
	I0805 10:48:02.293111    9606 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 10:48:02.293115    9606 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0805 10:48:02.293162    9606 start.go:340] cluster config:
	{Name:docker-flags-804000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-804000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 10:48:02.296914    9606 iso.go:125] acquiring lock: {Name:mk3c732f608e41abb95f9d7b90e7a96dff21b06c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 10:48:02.304744    9606 out.go:177] * Starting "docker-flags-804000" primary control-plane node in "docker-flags-804000" cluster
	I0805 10:48:02.308886    9606 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 10:48:02.308905    9606 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0805 10:48:02.308917    9606 cache.go:56] Caching tarball of preloaded images
	I0805 10:48:02.308985    9606 preload.go:172] Found /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 10:48:02.308991    9606 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 10:48:02.309060    9606 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/docker-flags-804000/config.json ...
	I0805 10:48:02.309077    9606 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/docker-flags-804000/config.json: {Name:mkeda947e46093327c6d7be7799760147708d087 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 10:48:02.309471    9606 start.go:360] acquireMachinesLock for docker-flags-804000: {Name:mkb00f076c5be0afa0d8c4c2e732c0f60f89f86b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 10:48:02.309509    9606 start.go:364] duration metric: took 30.875µs to acquireMachinesLock for "docker-flags-804000"
	I0805 10:48:02.309521    9606 start.go:93] Provisioning new machine with config: &{Name:docker-flags-804000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-804000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 10:48:02.309553    9606 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 10:48:02.312910    9606 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0805 10:48:02.330800    9606 start.go:159] libmachine.API.Create for "docker-flags-804000" (driver="qemu2")
	I0805 10:48:02.330830    9606 client.go:168] LocalClient.Create starting
	I0805 10:48:02.330903    9606 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/ca.pem
	I0805 10:48:02.330934    9606 main.go:141] libmachine: Decoding PEM data...
	I0805 10:48:02.330946    9606 main.go:141] libmachine: Parsing certificate...
	I0805 10:48:02.330983    9606 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/cert.pem
	I0805 10:48:02.331008    9606 main.go:141] libmachine: Decoding PEM data...
	I0805 10:48:02.331015    9606 main.go:141] libmachine: Parsing certificate...
	I0805 10:48:02.331371    9606 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19374-6507/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 10:48:02.481127    9606 main.go:141] libmachine: Creating SSH key...
	I0805 10:48:02.725885    9606 main.go:141] libmachine: Creating Disk image...
	I0805 10:48:02.725894    9606 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 10:48:02.726164    9606 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/docker-flags-804000/disk.qcow2.raw /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/docker-flags-804000/disk.qcow2
	I0805 10:48:02.735925    9606 main.go:141] libmachine: STDOUT: 
	I0805 10:48:02.735944    9606 main.go:141] libmachine: STDERR: 
	I0805 10:48:02.735997    9606 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/docker-flags-804000/disk.qcow2 +20000M
	I0805 10:48:02.743812    9606 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 10:48:02.743826    9606 main.go:141] libmachine: STDERR: 
	I0805 10:48:02.743843    9606 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/docker-flags-804000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/docker-flags-804000/disk.qcow2
	I0805 10:48:02.743846    9606 main.go:141] libmachine: Starting QEMU VM...
	I0805 10:48:02.743859    9606 qemu.go:418] Using hvf for hardware acceleration
	I0805 10:48:02.743887    9606 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/docker-flags-804000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/docker-flags-804000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/docker-flags-804000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:aa:85:22:91:0b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/docker-flags-804000/disk.qcow2
	I0805 10:48:02.745483    9606 main.go:141] libmachine: STDOUT: 
	I0805 10:48:02.745499    9606 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 10:48:02.745514    9606 client.go:171] duration metric: took 414.682792ms to LocalClient.Create
	I0805 10:48:04.747651    9606 start.go:128] duration metric: took 2.438113s to createHost
	I0805 10:48:04.747696    9606 start.go:83] releasing machines lock for "docker-flags-804000", held for 2.438209042s
	W0805 10:48:04.747753    9606 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 10:48:04.769660    9606 out.go:177] * Deleting "docker-flags-804000" in qemu2 ...
	W0805 10:48:04.790347    9606 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 10:48:04.790363    9606 start.go:729] Will try again in 5 seconds ...
	I0805 10:48:09.792542    9606 start.go:360] acquireMachinesLock for docker-flags-804000: {Name:mkb00f076c5be0afa0d8c4c2e732c0f60f89f86b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 10:48:09.793064    9606 start.go:364] duration metric: took 394.875µs to acquireMachinesLock for "docker-flags-804000"
	I0805 10:48:09.793234    9606 start.go:93] Provisioning new machine with config: &{Name:docker-flags-804000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-804000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 10:48:09.793572    9606 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 10:48:09.803035    9606 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0805 10:48:09.855436    9606 start.go:159] libmachine.API.Create for "docker-flags-804000" (driver="qemu2")
	I0805 10:48:09.855481    9606 client.go:168] LocalClient.Create starting
	I0805 10:48:09.855595    9606 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/ca.pem
	I0805 10:48:09.855643    9606 main.go:141] libmachine: Decoding PEM data...
	I0805 10:48:09.855666    9606 main.go:141] libmachine: Parsing certificate...
	I0805 10:48:09.855729    9606 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/cert.pem
	I0805 10:48:09.855763    9606 main.go:141] libmachine: Decoding PEM data...
	I0805 10:48:09.855780    9606 main.go:141] libmachine: Parsing certificate...
	I0805 10:48:09.856293    9606 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19374-6507/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 10:48:10.019927    9606 main.go:141] libmachine: Creating SSH key...
	I0805 10:48:10.368081    9606 main.go:141] libmachine: Creating Disk image...
	I0805 10:48:10.368098    9606 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 10:48:10.368330    9606 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/docker-flags-804000/disk.qcow2.raw /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/docker-flags-804000/disk.qcow2
	I0805 10:48:10.378032    9606 main.go:141] libmachine: STDOUT: 
	I0805 10:48:10.378058    9606 main.go:141] libmachine: STDERR: 
	I0805 10:48:10.378104    9606 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/docker-flags-804000/disk.qcow2 +20000M
	I0805 10:48:10.386164    9606 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 10:48:10.386179    9606 main.go:141] libmachine: STDERR: 
	I0805 10:48:10.386189    9606 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/docker-flags-804000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/docker-flags-804000/disk.qcow2
	I0805 10:48:10.386192    9606 main.go:141] libmachine: Starting QEMU VM...
	I0805 10:48:10.386205    9606 qemu.go:418] Using hvf for hardware acceleration
	I0805 10:48:10.386263    9606 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/docker-flags-804000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/docker-flags-804000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/docker-flags-804000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:61:b5:18:a6:56 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/docker-flags-804000/disk.qcow2
	I0805 10:48:10.387901    9606 main.go:141] libmachine: STDOUT: 
	I0805 10:48:10.387916    9606 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 10:48:10.387928    9606 client.go:171] duration metric: took 532.448ms to LocalClient.Create
	I0805 10:48:12.390052    9606 start.go:128] duration metric: took 2.596486166s to createHost
	I0805 10:48:12.390116    9606 start.go:83] releasing machines lock for "docker-flags-804000", held for 2.597027791s
	W0805 10:48:12.390573    9606 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-804000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-804000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 10:48:12.402151    9606 out.go:177] 
	W0805 10:48:12.406245    9606 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 10:48:12.406270    9606 out.go:239] * 
	* 
	W0805 10:48:12.409149    9606 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 10:48:12.415969    9606 out.go:177] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-804000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-804000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-804000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (76.221542ms)

-- stdout --
	* The control-plane node docker-flags-804000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-804000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-804000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-804000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-804000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-804000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-804000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-804000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-804000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (45.766458ms)

-- stdout --
	* The control-plane node docker-flags-804000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-804000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-804000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-804000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control-plane node docker-flags-804000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-804000\"\n"
panic.go:626: *** TestDockerFlags FAILED at 2024-08-05 10:48:12.554793 -0700 PDT m=+1363.272816501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-804000 -n docker-flags-804000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-804000 -n docker-flags-804000: exit status 7 (30.181459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-804000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-804000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-804000
--- FAIL: TestDockerFlags (10.47s)
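
All of the failures above share one signature: minikube launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, and the client cannot reach the /var/run/socket_vmnet Unix socket ("Connection refused"). That points at the CI host (the socket_vmnet daemon not listening) rather than at the test logic. A minimal triage sketch for the agent, assuming the daemon binary lives under the same /opt/socket_vmnet prefix as the client shown in the logs; the --vmnet-gateway value below is illustrative, not taken from this run:

	# Check that the Unix socket exists and that a socket_vmnet process is alive:
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# If the daemon is down, start it before re-running the suite
	# (daemon path and gateway address are assumptions, not from this log):
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet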

TestForceSystemdFlag (11.01s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-262000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-262000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.816871417s)

-- stdout --
	* [force-systemd-flag-262000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19374
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-262000" primary control-plane node in "force-systemd-flag-262000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-262000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0805 10:47:27.150521    9445 out.go:291] Setting OutFile to fd 1 ...
	I0805 10:47:27.150659    9445 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:47:27.150663    9445 out.go:304] Setting ErrFile to fd 2...
	I0805 10:47:27.150665    9445 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:47:27.150792    9445 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
	I0805 10:47:27.151861    9445 out.go:298] Setting JSON to false
	I0805 10:47:27.167923    9445 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6417,"bootTime":1722873630,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 10:47:27.167985    9445 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 10:47:27.172215    9445 out.go:177] * [force-systemd-flag-262000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 10:47:27.179255    9445 out.go:177]   - MINIKUBE_LOCATION=19374
	I0805 10:47:27.179313    9445 notify.go:220] Checking for updates...
	I0805 10:47:27.186211    9445 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	I0805 10:47:27.189231    9445 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 10:47:27.192193    9445 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 10:47:27.195201    9445 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	I0805 10:47:27.198113    9445 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 10:47:27.201534    9445 config.go:182] Loaded profile config "NoKubernetes-542000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 10:47:27.201608    9445 config.go:182] Loaded profile config "multinode-022000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 10:47:27.201663    9445 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 10:47:27.206173    9445 out.go:177] * Using the qemu2 driver based on user configuration
	I0805 10:47:27.213325    9445 start.go:297] selected driver: qemu2
	I0805 10:47:27.213333    9445 start.go:901] validating driver "qemu2" against <nil>
	I0805 10:47:27.213339    9445 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 10:47:27.215673    9445 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 10:47:27.218157    9445 out.go:177] * Automatically selected the socket_vmnet network
	I0805 10:47:27.221296    9445 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0805 10:47:27.221339    9445 cni.go:84] Creating CNI manager for ""
	I0805 10:47:27.221346    9445 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 10:47:27.221350    9445 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0805 10:47:27.221381    9445 start.go:340] cluster config:
	{Name:force-systemd-flag-262000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-262000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 10:47:27.225092    9445 iso.go:125] acquiring lock: {Name:mk3c732f608e41abb95f9d7b90e7a96dff21b06c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 10:47:27.232235    9445 out.go:177] * Starting "force-systemd-flag-262000" primary control-plane node in "force-systemd-flag-262000" cluster
	I0805 10:47:27.236159    9445 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 10:47:27.236175    9445 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0805 10:47:27.236183    9445 cache.go:56] Caching tarball of preloaded images
	I0805 10:47:27.236237    9445 preload.go:172] Found /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 10:47:27.236242    9445 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 10:47:27.236288    9445 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/force-systemd-flag-262000/config.json ...
	I0805 10:47:27.236298    9445 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/force-systemd-flag-262000/config.json: {Name:mkbd94da69ed330777ab75bcb129c80ea1e732f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 10:47:27.236554    9445 start.go:360] acquireMachinesLock for force-systemd-flag-262000: {Name:mkb00f076c5be0afa0d8c4c2e732c0f60f89f86b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 10:47:28.113932    9445 start.go:364] duration metric: took 877.364375ms to acquireMachinesLock for "force-systemd-flag-262000"
	I0805 10:47:28.114058    9445 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-262000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-262000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 10:47:28.114535    9445 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 10:47:28.119849    9445 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0805 10:47:28.160662    9445 start.go:159] libmachine.API.Create for "force-systemd-flag-262000" (driver="qemu2")
	I0805 10:47:28.160712    9445 client.go:168] LocalClient.Create starting
	I0805 10:47:28.160846    9445 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/ca.pem
	I0805 10:47:28.160913    9445 main.go:141] libmachine: Decoding PEM data...
	I0805 10:47:28.160928    9445 main.go:141] libmachine: Parsing certificate...
	I0805 10:47:28.161001    9445 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/cert.pem
	I0805 10:47:28.161046    9445 main.go:141] libmachine: Decoding PEM data...
	I0805 10:47:28.161062    9445 main.go:141] libmachine: Parsing certificate...
	I0805 10:47:28.161746    9445 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19374-6507/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 10:47:28.329000    9445 main.go:141] libmachine: Creating SSH key...
	I0805 10:47:28.418621    9445 main.go:141] libmachine: Creating Disk image...
	I0805 10:47:28.418627    9445 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 10:47:28.418825    9445 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/force-systemd-flag-262000/disk.qcow2.raw /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/force-systemd-flag-262000/disk.qcow2
	I0805 10:47:28.428261    9445 main.go:141] libmachine: STDOUT: 
	I0805 10:47:28.428281    9445 main.go:141] libmachine: STDERR: 
	I0805 10:47:28.428333    9445 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/force-systemd-flag-262000/disk.qcow2 +20000M
	I0805 10:47:28.436034    9445 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 10:47:28.436048    9445 main.go:141] libmachine: STDERR: 
	I0805 10:47:28.436067    9445 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/force-systemd-flag-262000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/force-systemd-flag-262000/disk.qcow2
	I0805 10:47:28.436075    9445 main.go:141] libmachine: Starting QEMU VM...
	I0805 10:47:28.436085    9445 qemu.go:418] Using hvf for hardware acceleration
	I0805 10:47:28.436113    9445 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/force-systemd-flag-262000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/force-systemd-flag-262000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/force-systemd-flag-262000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:de:0d:ea:7c:3b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/force-systemd-flag-262000/disk.qcow2
	I0805 10:47:28.437703    9445 main.go:141] libmachine: STDOUT: 
	I0805 10:47:28.437714    9445 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 10:47:28.437731    9445 client.go:171] duration metric: took 277.014375ms to LocalClient.Create
	I0805 10:47:30.439906    9445 start.go:128] duration metric: took 2.325370542s to createHost
	I0805 10:47:30.439953    9445 start.go:83] releasing machines lock for "force-systemd-flag-262000", held for 2.326011083s
	W0805 10:47:30.440021    9445 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 10:47:30.457972    9445 out.go:177] * Deleting "force-systemd-flag-262000" in qemu2 ...
	W0805 10:47:30.489722    9445 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 10:47:30.489748    9445 start.go:729] Will try again in 5 seconds ...
	I0805 10:47:35.490007    9445 start.go:360] acquireMachinesLock for force-systemd-flag-262000: {Name:mkb00f076c5be0afa0d8c4c2e732c0f60f89f86b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 10:47:35.490368    9445 start.go:364] duration metric: took 288.083µs to acquireMachinesLock for "force-systemd-flag-262000"
	I0805 10:47:35.490491    9445 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-262000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-262000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 10:47:35.490683    9445 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 10:47:35.509282    9445 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0805 10:47:35.559764    9445 start.go:159] libmachine.API.Create for "force-systemd-flag-262000" (driver="qemu2")
	I0805 10:47:35.559825    9445 client.go:168] LocalClient.Create starting
	I0805 10:47:35.559936    9445 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/ca.pem
	I0805 10:47:35.559989    9445 main.go:141] libmachine: Decoding PEM data...
	I0805 10:47:35.560008    9445 main.go:141] libmachine: Parsing certificate...
	I0805 10:47:35.560065    9445 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/cert.pem
	I0805 10:47:35.560100    9445 main.go:141] libmachine: Decoding PEM data...
	I0805 10:47:35.560111    9445 main.go:141] libmachine: Parsing certificate...
	I0805 10:47:35.560608    9445 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19374-6507/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 10:47:35.805681    9445 main.go:141] libmachine: Creating SSH key...
	I0805 10:47:35.871763    9445 main.go:141] libmachine: Creating Disk image...
	I0805 10:47:35.871769    9445 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 10:47:35.871965    9445 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/force-systemd-flag-262000/disk.qcow2.raw /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/force-systemd-flag-262000/disk.qcow2
	I0805 10:47:35.881310    9445 main.go:141] libmachine: STDOUT: 
	I0805 10:47:35.881325    9445 main.go:141] libmachine: STDERR: 
	I0805 10:47:35.881368    9445 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/force-systemd-flag-262000/disk.qcow2 +20000M
	I0805 10:47:35.889426    9445 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 10:47:35.889438    9445 main.go:141] libmachine: STDERR: 
	I0805 10:47:35.889448    9445 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/force-systemd-flag-262000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/force-systemd-flag-262000/disk.qcow2
	I0805 10:47:35.889458    9445 main.go:141] libmachine: Starting QEMU VM...
	I0805 10:47:35.889469    9445 qemu.go:418] Using hvf for hardware acceleration
	I0805 10:47:35.889491    9445 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/force-systemd-flag-262000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/force-systemd-flag-262000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/force-systemd-flag-262000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:f0:63:d0:d5:e5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/force-systemd-flag-262000/disk.qcow2
	I0805 10:47:35.891059    9445 main.go:141] libmachine: STDOUT: 
	I0805 10:47:35.891073    9445 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 10:47:35.891086    9445 client.go:171] duration metric: took 331.260375ms to LocalClient.Create
	I0805 10:47:37.893331    9445 start.go:128] duration metric: took 2.402618s to createHost
	I0805 10:47:37.893453    9445 start.go:83] releasing machines lock for "force-systemd-flag-262000", held for 2.403094834s
	W0805 10:47:37.893785    9445 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-262000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-262000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 10:47:37.902282    9445 out.go:177] 
	W0805 10:47:37.913295    9445 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 10:47:37.913321    9445 out.go:239] * 
	* 
	W0805 10:47:37.916698    9445 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 10:47:37.926180    9445 out.go:177] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-262000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-262000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-262000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (73.795875ms)

-- stdout --
	* The control-plane node force-systemd-flag-262000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-262000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-262000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-08-05 10:47:38.014574 -0700 PDT m=+1328.732136459
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-262000 -n force-systemd-flag-262000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-262000 -n force-systemd-flag-262000: exit status 7 (32.592834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-262000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-262000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-262000
--- FAIL: TestForceSystemdFlag (11.01s)
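
The exit-code pattern here mirrors TestDockerFlags above: the initial start exits 80 (GUEST_PROVISION), each follow-up ssh assertion exits 83 because the host never came up, and the post-mortem status check returns 7 (state Stopped). To confirm the remaining failures in this report share the same environmental cause rather than test-specific bugs, a quick grep over the captured output is enough (logs.txt is a placeholder file name, not produced by this run):

	# Count occurrences of the shared failure signature across the saved output:
	grep -c 'Failed to connect to "/var/run/socket_vmnet"' logs.txt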

TestForceSystemdEnv (10.48s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-989000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-989000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.285593375s)

-- stdout --
	* [force-systemd-env-989000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19374
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-989000" primary control-plane node in "force-systemd-env-989000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-989000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0805 10:47:51.740679    9566 out.go:291] Setting OutFile to fd 1 ...
	I0805 10:47:51.740818    9566 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:47:51.740821    9566 out.go:304] Setting ErrFile to fd 2...
	I0805 10:47:51.740827    9566 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:47:51.740958    9566 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
	I0805 10:47:51.742000    9566 out.go:298] Setting JSON to false
	I0805 10:47:51.757858    9566 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6441,"bootTime":1722873630,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 10:47:51.757935    9566 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 10:47:51.763278    9566 out.go:177] * [force-systemd-env-989000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 10:47:51.770120    9566 out.go:177]   - MINIKUBE_LOCATION=19374
	I0805 10:47:51.770180    9566 notify.go:220] Checking for updates...
	I0805 10:47:51.779167    9566 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	I0805 10:47:51.785153    9566 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 10:47:51.796179    9566 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 10:47:51.803117    9566 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	I0805 10:47:51.810172    9566 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0805 10:47:51.813553    9566 config.go:182] Loaded profile config "NoKubernetes-542000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v0.0.0
	I0805 10:47:51.813633    9566 config.go:182] Loaded profile config "multinode-022000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 10:47:51.813694    9566 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 10:47:51.818094    9566 out.go:177] * Using the qemu2 driver based on user configuration
	I0805 10:47:51.825164    9566 start.go:297] selected driver: qemu2
	I0805 10:47:51.825170    9566 start.go:901] validating driver "qemu2" against <nil>
	I0805 10:47:51.825175    9566 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 10:47:51.827598    9566 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 10:47:51.831186    9566 out.go:177] * Automatically selected the socket_vmnet network
	I0805 10:47:51.835063    9566 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0805 10:47:51.835082    9566 cni.go:84] Creating CNI manager for ""
	I0805 10:47:51.835090    9566 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 10:47:51.835104    9566 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0805 10:47:51.835140    9566 start.go:340] cluster config:
	{Name:force-systemd-env-989000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-989000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 10:47:51.839118    9566 iso.go:125] acquiring lock: {Name:mk3c732f608e41abb95f9d7b90e7a96dff21b06c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 10:47:51.842243    9566 out.go:177] * Starting "force-systemd-env-989000" primary control-plane node in "force-systemd-env-989000" cluster
	I0805 10:47:51.849188    9566 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 10:47:51.849205    9566 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0805 10:47:51.849226    9566 cache.go:56] Caching tarball of preloaded images
	I0805 10:47:51.849285    9566 preload.go:172] Found /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 10:47:51.849292    9566 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 10:47:51.849370    9566 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/force-systemd-env-989000/config.json ...
	I0805 10:47:51.849383    9566 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/force-systemd-env-989000/config.json: {Name:mk68dd3f0c2b8b520806813cab16daa20ca04aaa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 10:47:51.849709    9566 start.go:360] acquireMachinesLock for force-systemd-env-989000: {Name:mkb00f076c5be0afa0d8c4c2e732c0f60f89f86b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 10:47:51.849751    9566 start.go:364] duration metric: took 29.083µs to acquireMachinesLock for "force-systemd-env-989000"
	I0805 10:47:51.849763    9566 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-989000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-989000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 10:47:51.849799    9566 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 10:47:51.857957    9566 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0805 10:47:51.877476    9566 start.go:159] libmachine.API.Create for "force-systemd-env-989000" (driver="qemu2")
	I0805 10:47:51.877501    9566 client.go:168] LocalClient.Create starting
	I0805 10:47:51.877574    9566 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/ca.pem
	I0805 10:47:51.877622    9566 main.go:141] libmachine: Decoding PEM data...
	I0805 10:47:51.877633    9566 main.go:141] libmachine: Parsing certificate...
	I0805 10:47:51.877669    9566 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/cert.pem
	I0805 10:47:51.877695    9566 main.go:141] libmachine: Decoding PEM data...
	I0805 10:47:51.877705    9566 main.go:141] libmachine: Parsing certificate...
	I0805 10:47:51.878221    9566 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19374-6507/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 10:47:52.034624    9566 main.go:141] libmachine: Creating SSH key...
	I0805 10:47:52.148885    9566 main.go:141] libmachine: Creating Disk image...
	I0805 10:47:52.148891    9566 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 10:47:52.149118    9566 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/force-systemd-env-989000/disk.qcow2.raw /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/force-systemd-env-989000/disk.qcow2
	I0805 10:47:52.158174    9566 main.go:141] libmachine: STDOUT: 
	I0805 10:47:52.158191    9566 main.go:141] libmachine: STDERR: 
	I0805 10:47:52.158234    9566 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/force-systemd-env-989000/disk.qcow2 +20000M
	I0805 10:47:52.165923    9566 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 10:47:52.165943    9566 main.go:141] libmachine: STDERR: 
	I0805 10:47:52.165963    9566 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/force-systemd-env-989000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/force-systemd-env-989000/disk.qcow2
	I0805 10:47:52.165967    9566 main.go:141] libmachine: Starting QEMU VM...
	I0805 10:47:52.165977    9566 qemu.go:418] Using hvf for hardware acceleration
	I0805 10:47:52.166007    9566 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/force-systemd-env-989000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/force-systemd-env-989000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/force-systemd-env-989000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:d9:e6:6b:47:54 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/force-systemd-env-989000/disk.qcow2
	I0805 10:47:52.167620    9566 main.go:141] libmachine: STDOUT: 
	I0805 10:47:52.167634    9566 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 10:47:52.167650    9566 client.go:171] duration metric: took 290.145375ms to LocalClient.Create
	I0805 10:47:54.169803    9566 start.go:128] duration metric: took 2.320011375s to createHost
	I0805 10:47:54.169881    9566 start.go:83] releasing machines lock for "force-systemd-env-989000", held for 2.320148292s
	W0805 10:47:54.170050    9566 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 10:47:54.181285    9566 out.go:177] * Deleting "force-systemd-env-989000" in qemu2 ...
	W0805 10:47:54.211940    9566 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 10:47:54.211970    9566 start.go:729] Will try again in 5 seconds ...
	I0805 10:47:59.214059    9566 start.go:360] acquireMachinesLock for force-systemd-env-989000: {Name:mkb00f076c5be0afa0d8c4c2e732c0f60f89f86b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 10:47:59.619660    9566 start.go:364] duration metric: took 405.505041ms to acquireMachinesLock for "force-systemd-env-989000"
	I0805 10:47:59.619814    9566 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-989000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-989000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 10:47:59.620164    9566 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 10:47:59.633855    9566 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0805 10:47:59.681792    9566 start.go:159] libmachine.API.Create for "force-systemd-env-989000" (driver="qemu2")
	I0805 10:47:59.681838    9566 client.go:168] LocalClient.Create starting
	I0805 10:47:59.681964    9566 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/ca.pem
	I0805 10:47:59.682024    9566 main.go:141] libmachine: Decoding PEM data...
	I0805 10:47:59.682038    9566 main.go:141] libmachine: Parsing certificate...
	I0805 10:47:59.682101    9566 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/cert.pem
	I0805 10:47:59.682144    9566 main.go:141] libmachine: Decoding PEM data...
	I0805 10:47:59.682154    9566 main.go:141] libmachine: Parsing certificate...
	I0805 10:47:59.682752    9566 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19374-6507/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 10:47:59.849141    9566 main.go:141] libmachine: Creating SSH key...
	I0805 10:47:59.927745    9566 main.go:141] libmachine: Creating Disk image...
	I0805 10:47:59.927752    9566 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 10:47:59.927963    9566 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/force-systemd-env-989000/disk.qcow2.raw /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/force-systemd-env-989000/disk.qcow2
	I0805 10:47:59.937142    9566 main.go:141] libmachine: STDOUT: 
	I0805 10:47:59.937157    9566 main.go:141] libmachine: STDERR: 
	I0805 10:47:59.937208    9566 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/force-systemd-env-989000/disk.qcow2 +20000M
	I0805 10:47:59.945056    9566 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 10:47:59.945068    9566 main.go:141] libmachine: STDERR: 
	I0805 10:47:59.945080    9566 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/force-systemd-env-989000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/force-systemd-env-989000/disk.qcow2
	I0805 10:47:59.945084    9566 main.go:141] libmachine: Starting QEMU VM...
	I0805 10:47:59.945100    9566 qemu.go:418] Using hvf for hardware acceleration
	I0805 10:47:59.945145    9566 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/force-systemd-env-989000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/force-systemd-env-989000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/force-systemd-env-989000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:bd:75:69:70:1d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/force-systemd-env-989000/disk.qcow2
	I0805 10:47:59.946889    9566 main.go:141] libmachine: STDOUT: 
	I0805 10:47:59.946902    9566 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 10:47:59.946918    9566 client.go:171] duration metric: took 265.076959ms to LocalClient.Create
	I0805 10:48:01.949073    9566 start.go:128] duration metric: took 2.328892584s to createHost
	I0805 10:48:01.949128    9566 start.go:83] releasing machines lock for "force-systemd-env-989000", held for 2.329441792s
	W0805 10:48:01.949448    9566 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-989000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-989000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 10:48:01.968052    9566 out.go:177] 
	W0805 10:48:01.971944    9566 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 10:48:01.971968    9566 out.go:239] * 
	* 
	W0805 10:48:01.974894    9566 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 10:48:01.984895    9566 out.go:177] 
** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-989000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-989000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-989000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (79.469625ms)
-- stdout --
	* The control-plane node force-systemd-env-989000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-989000"
-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-989000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-08-05 10:48:02.08209 -0700 PDT m=+1352.799973168
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-989000 -n force-systemd-env-989000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-989000 -n force-systemd-env-989000: exit status 7 (33.633333ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-989000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-989000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-989000
--- FAIL: TestForceSystemdEnv (10.48s)
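Note: every attempt in this failure aborts at the same step. The qemu2 driver launches the VM through /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach the socket_vmnet daemon at /var/run/socket_vmnet; "Connection refused" on a Unix socket means nothing is listening at that path. A minimal host-side triage sketch, assuming socket_vmnet was installed as a Homebrew service on this host (the service name is an assumption; adjust for a manual install):

    # Does the daemon's Unix socket exist on the host?
    ls -l /var/run/socket_vmnet
    # Is a socket_vmnet process alive?
    pgrep -fl socket_vmnet
    # Assumes the Homebrew-packaged launchd service; vmnet needs root.
    sudo brew services restart socket_vmnet

If the daemon comes back up, rerunning the suite should clear this whole class of GUEST_PROVISION failures, since the disk and certificate steps above all succeeded.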
TestErrorSpam/setup (9.89s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-159000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-159000 --driver=qemu2 
error_spam_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p nospam-159000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-159000 --driver=qemu2 : exit status 80 (9.882891667s)
-- stdout --
	* [nospam-159000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19374
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "nospam-159000" primary control-plane node in "nospam-159000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "nospam-159000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p nospam-159000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:83: "out/minikube-darwin-arm64 start -p nospam-159000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-159000 --driver=qemu2 " failed: exit status 80
error_spam_test.go:96: unexpected stderr: "! StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* Failed to start qemu2 VM. Running \"minikube delete -p nospam-159000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:96: unexpected stderr: "╭─────────────────────────────────────────────────────────────────────────────────────────────╮"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * If the above advice does not help, please let us know:                                 │"
error_spam_test.go:96: unexpected stderr: "│      https://github.com/kubernetes/minikube/issues/new/choose                               │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "╰─────────────────────────────────────────────────────────────────────────────────────────────╯"
error_spam_test.go:110: minikube stdout:
* [nospam-159000] minikube v1.33.1 on Darwin 14.5 (arm64)
- MINIKUBE_LOCATION=19374
- KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "nospam-159000" primary control-plane node in "nospam-159000" cluster
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "nospam-159000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
error_spam_test.go:111: minikube stderr:
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p nospam-159000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
error_spam_test.go:121: missing kubeadm init sub-step "Generating certificates and keys ..."
error_spam_test.go:121: missing kubeadm init sub-step "Booting up control plane ..."
error_spam_test.go:121: missing kubeadm init sub-step "Configuring RBAC rules ..."
--- FAIL: TestErrorSpam/setup (9.89s)
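Note: the createHost trace in the preceding test shows that disk provisioning itself succeeds; the driver converts a raw base image to qcow2 and then grows it, and only the subsequent socket_vmnet_client launch fails. A sketch of the equivalent qemu-img steps, with illustrative paths rather than the CI paths:

    # Convert the raw base image to qcow2, as the driver's logged step does.
    qemu-img convert -f raw -O qcow2 disk.qcow2.raw disk.qcow2
    # Grow the image by 20000 MB, matching the logged resize step.
    qemu-img resize disk.qcow2 +20000M
    # Sanity-check the resulting format and virtual size.
    qemu-img info disk.qcow2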
TestFunctional/serial/StartWithProxy (9.91s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-908000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2230: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-908000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : exit status 80 (9.842854458s)
-- stdout --
	* [functional-908000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19374
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "functional-908000" primary control-plane node in "functional-908000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "functional-908000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:51015 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:51015 to docker env.
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! Local proxy ignored: not passing HTTP_PROXY=localhost:51015 to docker env.
	* Failed to start qemu2 VM. Running "minikube delete -p functional-908000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2232: failed minikube start. args "out/minikube-darwin-arm64 start -p functional-908000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 ": exit status 80
functional_test.go:2237: start stdout=* [functional-908000] minikube v1.33.1 on Darwin 14.5 (arm64)
- MINIKUBE_LOCATION=19374
- KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "functional-908000" primary control-plane node in "functional-908000" cluster
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "functional-908000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
, want: *Found network options:*
functional_test.go:2242: start stderr=! Local proxy ignored: not passing HTTP_PROXY=localhost:51015 to docker env.
! Local proxy ignored: not passing HTTP_PROXY=localhost:51015 to docker env.
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! Local proxy ignored: not passing HTTP_PROXY=localhost:51015 to docker env.
* Failed to start qemu2 VM. Running "minikube delete -p functional-908000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
, want: *You appear to be using a proxy*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000: exit status 7 (68.27875ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-908000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/StartWithProxy (9.91s)
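Note: the proxy assertions this test exists for never ran; the start died at the same vmnet step. The socket can also be probed directly from the host to distinguish a missing daemon from a stale socket file (macOS nc supports Unix-domain sockets via -U); a small sketch:

    # Exits non-zero with "Connection refused" if nothing is listening.
    nc -U /var/run/socket_vmnet < /dev/null && echo listening || echo no-listener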
TestFunctional/serial/SoftStart (5.26s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-908000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-908000 --alsologtostderr -v=8: exit status 80 (5.18631125s)
-- stdout --
	* [functional-908000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19374
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-908000" primary control-plane node in "functional-908000" cluster
	* Restarting existing qemu2 VM for "functional-908000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-908000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0805 10:26:51.072273    7277 out.go:291] Setting OutFile to fd 1 ...
	I0805 10:26:51.072420    7277 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:26:51.072423    7277 out.go:304] Setting ErrFile to fd 2...
	I0805 10:26:51.072425    7277 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:26:51.072578    7277 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
	I0805 10:26:51.073544    7277 out.go:298] Setting JSON to false
	I0805 10:26:51.089613    7277 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5181,"bootTime":1722873630,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 10:26:51.089681    7277 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 10:26:51.094493    7277 out.go:177] * [functional-908000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 10:26:51.102540    7277 out.go:177]   - MINIKUBE_LOCATION=19374
	I0805 10:26:51.102622    7277 notify.go:220] Checking for updates...
	I0805 10:26:51.109477    7277 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	I0805 10:26:51.112478    7277 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 10:26:51.115440    7277 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 10:26:51.118472    7277 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	I0805 10:26:51.121569    7277 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 10:26:51.124712    7277 config.go:182] Loaded profile config "functional-908000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 10:26:51.124765    7277 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 10:26:51.128448    7277 out.go:177] * Using the qemu2 driver based on existing profile
	I0805 10:26:51.135405    7277 start.go:297] selected driver: qemu2
	I0805 10:26:51.135410    7277 start.go:901] validating driver "qemu2" against &{Name:functional-908000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-908000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 10:26:51.135483    7277 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 10:26:51.137901    7277 cni.go:84] Creating CNI manager for ""
	I0805 10:26:51.137917    7277 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 10:26:51.137956    7277 start.go:340] cluster config:
	{Name:functional-908000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-908000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 10:26:51.141590    7277 iso.go:125] acquiring lock: {Name:mk3c732f608e41abb95f9d7b90e7a96dff21b06c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 10:26:51.150472    7277 out.go:177] * Starting "functional-908000" primary control-plane node in "functional-908000" cluster
	I0805 10:26:51.154475    7277 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 10:26:51.154494    7277 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0805 10:26:51.154504    7277 cache.go:56] Caching tarball of preloaded images
	I0805 10:26:51.154570    7277 preload.go:172] Found /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 10:26:51.154576    7277 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 10:26:51.154632    7277 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/functional-908000/config.json ...
	I0805 10:26:51.155144    7277 start.go:360] acquireMachinesLock for functional-908000: {Name:mkb00f076c5be0afa0d8c4c2e732c0f60f89f86b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 10:26:51.155179    7277 start.go:364] duration metric: took 28.75µs to acquireMachinesLock for "functional-908000"
	I0805 10:26:51.155189    7277 start.go:96] Skipping create...Using existing machine configuration
	I0805 10:26:51.155195    7277 fix.go:54] fixHost starting: 
	I0805 10:26:51.155329    7277 fix.go:112] recreateIfNeeded on functional-908000: state=Stopped err=<nil>
	W0805 10:26:51.155339    7277 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 10:26:51.159562    7277 out.go:177] * Restarting existing qemu2 VM for "functional-908000" ...
	I0805 10:26:51.167444    7277 qemu.go:418] Using hvf for hardware acceleration
	I0805 10:26:51.167482    7277 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/functional-908000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/functional-908000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/functional-908000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:78:e9:6e:95:a1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/functional-908000/disk.qcow2
	I0805 10:26:51.169382    7277 main.go:141] libmachine: STDOUT: 
	I0805 10:26:51.169399    7277 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 10:26:51.169426    7277 fix.go:56] duration metric: took 14.230959ms for fixHost
	I0805 10:26:51.169432    7277 start.go:83] releasing machines lock for "functional-908000", held for 14.248708ms
	W0805 10:26:51.169437    7277 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 10:26:51.169475    7277 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 10:26:51.169480    7277 start.go:729] Will try again in 5 seconds ...
	I0805 10:26:56.171683    7277 start.go:360] acquireMachinesLock for functional-908000: {Name:mkb00f076c5be0afa0d8c4c2e732c0f60f89f86b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 10:26:56.172221    7277 start.go:364] duration metric: took 420.75µs to acquireMachinesLock for "functional-908000"
	I0805 10:26:56.172332    7277 start.go:96] Skipping create...Using existing machine configuration
	I0805 10:26:56.172352    7277 fix.go:54] fixHost starting: 
	I0805 10:26:56.173065    7277 fix.go:112] recreateIfNeeded on functional-908000: state=Stopped err=<nil>
	W0805 10:26:56.173093    7277 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 10:26:56.179692    7277 out.go:177] * Restarting existing qemu2 VM for "functional-908000" ...
	I0805 10:26:56.183456    7277 qemu.go:418] Using hvf for hardware acceleration
	I0805 10:26:56.183664    7277 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/functional-908000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/functional-908000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/functional-908000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:78:e9:6e:95:a1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/functional-908000/disk.qcow2
	I0805 10:26:56.193584    7277 main.go:141] libmachine: STDOUT: 
	I0805 10:26:56.193654    7277 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 10:26:56.193789    7277 fix.go:56] duration metric: took 21.436125ms for fixHost
	I0805 10:26:56.193816    7277 start.go:83] releasing machines lock for "functional-908000", held for 21.569ms
	W0805 10:26:56.193998    7277 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-908000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-908000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 10:26:56.201376    7277 out.go:177] 
	W0805 10:26:56.205493    7277 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 10:26:56.205518    7277 out.go:239] * 
	* 
	W0805 10:26:56.208405    7277 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 10:26:56.215388    7277 out.go:177] 
** /stderr **
functional_test.go:657: failed to soft start minikube. args "out/minikube-darwin-arm64 start -p functional-908000 --alsologtostderr -v=8": exit status 80
functional_test.go:659: soft start took 5.1880405s for "functional-908000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000: exit status 7 (67.110542ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-908000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/SoftStart (5.26s)
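Note: the soft-start path differs from fresh creation only in that it restarts the existing VM (the fixHost/recreateIfNeeded trace above) instead of recreating the disk, but it still launches qemu through socket_vmnet_client. If host networking cannot be restored, one possible workaround is to recreate the profile on the qemu2 driver's builtin user-mode network; this assumes the installed minikube build supports --network=user, and host-to-guest access (minikube service/tunnel) is limited in that mode:

    # --network=user assumed supported by this minikube build; verify first.
    minikube delete -p functional-908000
    minikube start -p functional-908000 --driver=qemu2 --network=user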
TestFunctional/serial/KubeContext (0.06s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
functional_test.go:677: (dbg) Non-zero exit: kubectl config current-context: exit status 1 (31.809125ms)
** stderr ** 
	error: current-context is not set
** /stderr **
functional_test.go:679: failed to get current-context. args "kubectl config current-context" : exit status 1
functional_test.go:683: expected current-context = "functional-908000", but got *""*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000: exit status 7 (30.069709ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-908000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubeContext (0.06s)
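Note: this is a cascading failure rather than an independent one. minikube only writes the functional-908000 context into kubeconfig after a successful start, so with the VM stopped the missing current-context is expected. Once a start succeeds, the context can be inspected and re-pointed with standard commands; a short sketch:

    # List contexts; functional-908000 should appear after a successful start.
    kubectl config get-contexts
    # Re-point kubeconfig at the running profile if the IP or port changed.
    minikube -p functional-908000 update-context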
TestFunctional/serial/KubectlGetPods (0.06s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-908000 get po -A
functional_test.go:692: (dbg) Non-zero exit: kubectl --context functional-908000 get po -A: exit status 1 (26.85075ms)
** stderr ** 
	Error in configuration: context was not found for specified context: functional-908000
** /stderr **
functional_test.go:694: failed to get kubectl pods: args "kubectl --context functional-908000 get po -A" : exit status 1
functional_test.go:698: expected stderr to be empty but got *"Error in configuration: context was not found for specified context: functional-908000\n"*: args "kubectl --context functional-908000 get po -A"
functional_test.go:701: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-908000 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000: exit status 7 (30.3575ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-908000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubectlGetPods (0.06s)
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh sudo crictl images
functional_test.go:1120: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh sudo crictl images: exit status 83 (48.039833ms)
-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"
-- /stdout --
functional_test.go:1122: failed to get images by "out/minikube-darwin-arm64 -p functional-908000 ssh sudo crictl images" ssh exit status 83
functional_test.go:1126: expected sha for pause:3.3 "3d18732f8686c" to be in the output but got *
-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"
-- /stdout --*
--- FAIL: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.05s)
TestFunctional/serial/CacheCmd/cache/cache_reload (0.15s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1143: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh sudo docker rmi registry.k8s.io/pause:latest: exit status 83 (38.801584ms)
-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"
-- /stdout --
functional_test.go:1146: failed to manually delete image "out/minikube-darwin-arm64 -p functional-908000 ssh sudo docker rmi registry.k8s.io/pause:latest" : exit status 83
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (37.850042ms)
-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"
-- /stdout --
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (41.00975ms)
-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"
-- /stdout --
functional_test.go:1161: expected "out/minikube-darwin-arm64 -p functional-908000 ssh sudo crictl inspecti registry.k8s.io/pause:latest" to run successfully but got error: exit status 83
--- FAIL: TestFunctional/serial/CacheCmd/cache/cache_reload (0.15s)
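Note: the round-trip being exercised here is delete-then-reload: remove the image inside the node, run cache reload to push the cached image back in, then confirm it with crictl. With a running node, the same check can be repeated by hand using the exact commands from the log:

    out/minikube-darwin-arm64 -p functional-908000 ssh sudo docker rmi registry.k8s.io/pause:latest
    out/minikube-darwin-arm64 -p functional-908000 cache reload
    out/minikube-darwin-arm64 -p functional-908000 ssh sudo crictl inspecti registry.k8s.io/pause:latest

Here every ssh invocation short-circuits with exit status 83 (host not running), so the cache logic itself was never reached.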
TestFunctional/serial/MinikubeKubectlCmd (0.74s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 kubectl -- --context functional-908000 get pods
functional_test.go:712: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 kubectl -- --context functional-908000 get pods: exit status 1 (708.640416ms)
** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-908000
	* no server found for cluster "functional-908000"

** /stderr **
functional_test.go:715: failed to get pods. args "out/minikube-darwin-arm64 -p functional-908000 kubectl -- --context functional-908000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000: exit status 7 (31.337042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-908000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (0.74s)
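
Note: the "context was not found" error means the kubeconfig at $KUBECONFIG carries no entry for the profile, which is expected when provisioning never completed. A quick hand check (not part of the test itself) would be:

    # list the contexts minikube has written; functional-908000 should appear once the cluster is up
    kubectl config get-contexts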

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.97s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-908000 get pods
functional_test.go:737: (dbg) Non-zero exit: out/kubectl --context functional-908000 get pods: exit status 1 (943.688042ms)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-908000
	* no server found for cluster "functional-908000"

** /stderr **
functional_test.go:740: failed to run kubectl directly. args "out/kubectl --context functional-908000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000: exit status 7 (29.78675ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-908000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.97s)
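
Note: if the VM can be brought up later, the missing kubeconfig entry does not need to be edited by hand; minikube can rewrite it for the profile. A possible recovery step, assuming the host eventually starts:

    out/minikube-darwin-arm64 -p functional-908000 update-context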

TestFunctional/serial/ExtraConfig (5.21s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-908000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-908000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (5.175269125s)

-- stdout --
	* [functional-908000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19374
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-908000" primary control-plane node in "functional-908000" cluster
	* Restarting existing qemu2 VM for "functional-908000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-908000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-908000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:755: failed to restart minikube. args "out/minikube-darwin-arm64 start -p functional-908000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:757: restart took 5.175444291s for "functional-908000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000: exit status 7 (32.902584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-908000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (5.21s)
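
Note: every restart in this run dies at the same point: the qemu2 driver launches the VM through /opt/socket_vmnet/bin/socket_vmnet_client, and the socket_vmnet daemon behind /var/run/socket_vmnet refuses the connection. A sketch of the checks one would run on the build agent, assuming socket_vmnet was installed via Homebrew as the paths suggest:

    ls -l /var/run/socket_vmnet                 # the socket should exist and be owned by root
    sudo launchctl list | grep socket_vmnet     # the daemon should be loaded
    sudo brew services restart socket_vmnet     # restart it if it is not accepting connections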

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-908000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:806: (dbg) Non-zero exit: kubectl --context functional-908000 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (35.112917ms)

** stderr ** 
	error: context "functional-908000" does not exist

** /stderr **
functional_test.go:808: failed to get components. args "kubectl --context functional-908000 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000: exit status 7 (29.063666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-908000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ComponentHealth (0.06s)
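
Note: this test can only pass against a live control plane. The query it issues, for reference (on a healthy cluster each returned pod carries a Ready condition with status "True"):

    kubectl --context functional-908000 get po -l tier=control-plane -n kube-system -o=json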

TestFunctional/serial/LogsCmd (0.08s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 logs
functional_test.go:1232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 logs: exit status 83 (79.683875ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                  | download-only-834000 | jenkins | v1.33.1 | 05 Aug 24 10:25 PDT |                     |
	|         | -p download-only-834000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 05 Aug 24 10:25 PDT | 05 Aug 24 10:25 PDT |
	| delete  | -p download-only-834000                                                  | download-only-834000 | jenkins | v1.33.1 | 05 Aug 24 10:25 PDT | 05 Aug 24 10:25 PDT |
	| start   | -o=json --download-only                                                  | download-only-998000 | jenkins | v1.33.1 | 05 Aug 24 10:25 PDT |                     |
	|         | -p download-only-998000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 05 Aug 24 10:25 PDT | 05 Aug 24 10:25 PDT |
	| delete  | -p download-only-998000                                                  | download-only-998000 | jenkins | v1.33.1 | 05 Aug 24 10:25 PDT | 05 Aug 24 10:25 PDT |
	| start   | -o=json --download-only                                                  | download-only-689000 | jenkins | v1.33.1 | 05 Aug 24 10:25 PDT |                     |
	|         | -p download-only-689000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                                        |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT | 05 Aug 24 10:26 PDT |
	| delete  | -p download-only-689000                                                  | download-only-689000 | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT | 05 Aug 24 10:26 PDT |
	| delete  | -p download-only-834000                                                  | download-only-834000 | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT | 05 Aug 24 10:26 PDT |
	| delete  | -p download-only-998000                                                  | download-only-998000 | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT | 05 Aug 24 10:26 PDT |
	| delete  | -p download-only-689000                                                  | download-only-689000 | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT | 05 Aug 24 10:26 PDT |
	| start   | --download-only -p                                                       | binary-mirror-383000 | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT |                     |
	|         | binary-mirror-383000                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
	|         | --binary-mirror                                                          |                      |         |         |                     |                     |
	|         | http://127.0.0.1:50983                                                   |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-383000                                                  | binary-mirror-383000 | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT | 05 Aug 24 10:26 PDT |
	| addons  | enable dashboard -p                                                      | addons-690000        | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT |                     |
	|         | addons-690000                                                            |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                     | addons-690000        | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT |                     |
	|         | addons-690000                                                            |                      |         |         |                     |                     |
	| start   | -p addons-690000 --wait=true                                             | addons-690000        | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT |                     |
	|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
	|         | --addons=registry                                                        |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
	|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
	| delete  | -p addons-690000                                                         | addons-690000        | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT | 05 Aug 24 10:26 PDT |
	| start   | -p nospam-159000 -n=1 --memory=2250 --wait=false                         | nospam-159000        | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT |                     |
	|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-159000 |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| start   | nospam-159000 --log_dir                                                  | nospam-159000        | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-159000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-159000 --log_dir                                                  | nospam-159000        | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-159000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-159000 --log_dir                                                  | nospam-159000        | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-159000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| pause   | nospam-159000 --log_dir                                                  | nospam-159000        | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-159000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-159000 --log_dir                                                  | nospam-159000        | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-159000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-159000 --log_dir                                                  | nospam-159000        | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-159000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| unpause | nospam-159000 --log_dir                                                  | nospam-159000        | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-159000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-159000 --log_dir                                                  | nospam-159000        | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-159000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-159000 --log_dir                                                  | nospam-159000        | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-159000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| stop    | nospam-159000 --log_dir                                                  | nospam-159000        | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT | 05 Aug 24 10:26 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-159000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-159000 --log_dir                                                  | nospam-159000        | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT | 05 Aug 24 10:26 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-159000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-159000 --log_dir                                                  | nospam-159000        | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT | 05 Aug 24 10:26 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-159000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| delete  | -p nospam-159000                                                         | nospam-159000        | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT | 05 Aug 24 10:26 PDT |
	| start   | -p functional-908000                                                     | functional-908000    | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT |                     |
	|         | --memory=4000                                                            |                      |         |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
	|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
	| start   | -p functional-908000                                                     | functional-908000    | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT |                     |
	|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
	| cache   | functional-908000 cache add                                              | functional-908000    | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT | 05 Aug 24 10:26 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | functional-908000 cache add                                              | functional-908000    | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT | 05 Aug 24 10:26 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | functional-908000 cache add                                              | functional-908000    | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT | 05 Aug 24 10:26 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-908000 cache add                                              | functional-908000    | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT | 05 Aug 24 10:26 PDT |
	|         | minikube-local-cache-test:functional-908000                              |                      |         |         |                     |                     |
	| cache   | functional-908000 cache delete                                           | functional-908000    | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT | 05 Aug 24 10:26 PDT |
	|         | minikube-local-cache-test:functional-908000                              |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT | 05 Aug 24 10:26 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | list                                                                     | minikube             | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT | 05 Aug 24 10:26 PDT |
	| ssh     | functional-908000 ssh sudo                                               | functional-908000    | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT |                     |
	|         | crictl images                                                            |                      |         |         |                     |                     |
	| ssh     | functional-908000                                                        | functional-908000    | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT |                     |
	|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| ssh     | functional-908000 ssh                                                    | functional-908000    | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-908000 cache reload                                           | functional-908000    | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT | 05 Aug 24 10:26 PDT |
	| ssh     | functional-908000 ssh                                                    | functional-908000    | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT | 05 Aug 24 10:26 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT | 05 Aug 24 10:26 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| kubectl | functional-908000 kubectl --                                             | functional-908000    | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT |                     |
	|         | --context functional-908000                                              |                      |         |         |                     |                     |
	|         | get pods                                                                 |                      |         |         |                     |                     |
	| start   | -p functional-908000                                                     | functional-908000    | jenkins | v1.33.1 | 05 Aug 24 10:27 PDT |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
	|         | --wait=all                                                               |                      |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/05 10:27:01
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0805 10:27:01.211218    7353 out.go:291] Setting OutFile to fd 1 ...
	I0805 10:27:01.211342    7353 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:27:01.211344    7353 out.go:304] Setting ErrFile to fd 2...
	I0805 10:27:01.211346    7353 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:27:01.211460    7353 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
	I0805 10:27:01.212510    7353 out.go:298] Setting JSON to false
	I0805 10:27:01.228572    7353 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5191,"bootTime":1722873630,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 10:27:01.228637    7353 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 10:27:01.234251    7353 out.go:177] * [functional-908000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 10:27:01.243134    7353 out.go:177]   - MINIKUBE_LOCATION=19374
	I0805 10:27:01.243187    7353 notify.go:220] Checking for updates...
	I0805 10:27:01.249144    7353 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	I0805 10:27:01.252118    7353 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 10:27:01.253463    7353 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 10:27:01.256137    7353 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	I0805 10:27:01.259169    7353 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 10:27:01.262498    7353 config.go:182] Loaded profile config "functional-908000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 10:27:01.262551    7353 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 10:27:01.267081    7353 out.go:177] * Using the qemu2 driver based on existing profile
	I0805 10:27:01.274126    7353 start.go:297] selected driver: qemu2
	I0805 10:27:01.274131    7353 start.go:901] validating driver "qemu2" against &{Name:functional-908000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-908000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 10:27:01.274201    7353 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 10:27:01.276398    7353 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 10:27:01.276415    7353 cni.go:84] Creating CNI manager for ""
	I0805 10:27:01.276423    7353 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 10:27:01.276469    7353 start.go:340] cluster config:
	{Name:functional-908000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-908000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 10:27:01.279888    7353 iso.go:125] acquiring lock: {Name:mk3c732f608e41abb95f9d7b90e7a96dff21b06c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 10:27:01.286093    7353 out.go:177] * Starting "functional-908000" primary control-plane node in "functional-908000" cluster
	I0805 10:27:01.290156    7353 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 10:27:01.290170    7353 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0805 10:27:01.290189    7353 cache.go:56] Caching tarball of preloaded images
	I0805 10:27:01.290255    7353 preload.go:172] Found /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 10:27:01.290259    7353 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 10:27:01.290316    7353 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/functional-908000/config.json ...
	I0805 10:27:01.290635    7353 start.go:360] acquireMachinesLock for functional-908000: {Name:mkb00f076c5be0afa0d8c4c2e732c0f60f89f86b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 10:27:01.290667    7353 start.go:364] duration metric: took 27.625µs to acquireMachinesLock for "functional-908000"
	I0805 10:27:01.290674    7353 start.go:96] Skipping create...Using existing machine configuration
	I0805 10:27:01.290678    7353 fix.go:54] fixHost starting: 
	I0805 10:27:01.290790    7353 fix.go:112] recreateIfNeeded on functional-908000: state=Stopped err=<nil>
	W0805 10:27:01.290797    7353 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 10:27:01.300122    7353 out.go:177] * Restarting existing qemu2 VM for "functional-908000" ...
	I0805 10:27:01.306136    7353 qemu.go:418] Using hvf for hardware acceleration
	I0805 10:27:01.306175    7353 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/functional-908000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/functional-908000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/functional-908000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:78:e9:6e:95:a1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/functional-908000/disk.qcow2
	I0805 10:27:01.308115    7353 main.go:141] libmachine: STDOUT: 
	I0805 10:27:01.308133    7353 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 10:27:01.308163    7353 fix.go:56] duration metric: took 17.486083ms for fixHost
	I0805 10:27:01.308165    7353 start.go:83] releasing machines lock for "functional-908000", held for 17.496334ms
	W0805 10:27:01.308172    7353 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 10:27:01.308209    7353 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 10:27:01.308213    7353 start.go:729] Will try again in 5 seconds ...
	I0805 10:27:06.310231    7353 start.go:360] acquireMachinesLock for functional-908000: {Name:mkb00f076c5be0afa0d8c4c2e732c0f60f89f86b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 10:27:06.310509    7353 start.go:364] duration metric: took 195.041µs to acquireMachinesLock for "functional-908000"
	I0805 10:27:06.310594    7353 start.go:96] Skipping create...Using existing machine configuration
	I0805 10:27:06.310601    7353 fix.go:54] fixHost starting: 
	I0805 10:27:06.311004    7353 fix.go:112] recreateIfNeeded on functional-908000: state=Stopped err=<nil>
	W0805 10:27:06.311013    7353 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 10:27:06.314385    7353 out.go:177] * Restarting existing qemu2 VM for "functional-908000" ...
	I0805 10:27:06.323362    7353 qemu.go:418] Using hvf for hardware acceleration
	I0805 10:27:06.323448    7353 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/functional-908000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/functional-908000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/functional-908000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:78:e9:6e:95:a1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/functional-908000/disk.qcow2
	I0805 10:27:06.327486    7353 main.go:141] libmachine: STDOUT: 
	I0805 10:27:06.327509    7353 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 10:27:06.327559    7353 fix.go:56] duration metric: took 16.959042ms for fixHost
	I0805 10:27:06.327563    7353 start.go:83] releasing machines lock for "functional-908000", held for 17.02375ms
	W0805 10:27:06.327630    7353 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-908000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 10:27:06.334378    7353 out.go:177] 
	W0805 10:27:06.338385    7353 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 10:27:06.338394    7353 out.go:239] * 
	W0805 10:27:06.339191    7353 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 10:27:06.350353    7353 out.go:177] 
	
	
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test.go:1234: out/minikube-darwin-arm64 -p functional-908000 logs failed: exit status 83
functional_test.go:1224: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-834000 | jenkins | v1.33.1 | 05 Aug 24 10:25 PDT |                     |
|         | -p download-only-834000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 05 Aug 24 10:25 PDT | 05 Aug 24 10:25 PDT |
| delete  | -p download-only-834000                                                  | download-only-834000 | jenkins | v1.33.1 | 05 Aug 24 10:25 PDT | 05 Aug 24 10:25 PDT |
| start   | -o=json --download-only                                                  | download-only-998000 | jenkins | v1.33.1 | 05 Aug 24 10:25 PDT |                     |
|         | -p download-only-998000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.30.3                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 05 Aug 24 10:25 PDT | 05 Aug 24 10:25 PDT |
| delete  | -p download-only-998000                                                  | download-only-998000 | jenkins | v1.33.1 | 05 Aug 24 10:25 PDT | 05 Aug 24 10:25 PDT |
| start   | -o=json --download-only                                                  | download-only-689000 | jenkins | v1.33.1 | 05 Aug 24 10:25 PDT |                     |
|         | -p download-only-689000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.31.0-rc.0                                        |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT | 05 Aug 24 10:26 PDT |
| delete  | -p download-only-689000                                                  | download-only-689000 | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT | 05 Aug 24 10:26 PDT |
| delete  | -p download-only-834000                                                  | download-only-834000 | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT | 05 Aug 24 10:26 PDT |
| delete  | -p download-only-998000                                                  | download-only-998000 | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT | 05 Aug 24 10:26 PDT |
| delete  | -p download-only-689000                                                  | download-only-689000 | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT | 05 Aug 24 10:26 PDT |
| start   | --download-only -p                                                       | binary-mirror-383000 | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT |                     |
|         | binary-mirror-383000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:50983                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-383000                                                  | binary-mirror-383000 | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT | 05 Aug 24 10:26 PDT |
| addons  | enable dashboard -p                                                      | addons-690000        | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT |                     |
|         | addons-690000                                                            |                      |         |         |                     |                     |
| addons  | disable dashboard -p                                                     | addons-690000        | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT |                     |
|         | addons-690000                                                            |                      |         |         |                     |                     |
| start   | -p addons-690000 --wait=true                                             | addons-690000        | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
| delete  | -p addons-690000                                                         | addons-690000        | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT | 05 Aug 24 10:26 PDT |
| start   | -p nospam-159000 -n=1 --memory=2250 --wait=false                         | nospam-159000        | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT |                     |
|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-159000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-159000 --log_dir                                                  | nospam-159000        | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-159000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-159000 --log_dir                                                  | nospam-159000        | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-159000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-159000 --log_dir                                                  | nospam-159000        | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-159000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-159000 --log_dir                                                  | nospam-159000        | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-159000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-159000 --log_dir                                                  | nospam-159000        | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-159000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-159000 --log_dir                                                  | nospam-159000        | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-159000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-159000 --log_dir                                                  | nospam-159000        | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-159000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-159000 --log_dir                                                  | nospam-159000        | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-159000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-159000 --log_dir                                                  | nospam-159000        | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-159000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-159000 --log_dir                                                  | nospam-159000        | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT | 05 Aug 24 10:26 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-159000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-159000 --log_dir                                                  | nospam-159000        | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT | 05 Aug 24 10:26 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-159000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-159000 --log_dir                                                  | nospam-159000        | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT | 05 Aug 24 10:26 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-159000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-159000                                                         | nospam-159000        | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT | 05 Aug 24 10:26 PDT |
| start   | -p functional-908000                                                     | functional-908000    | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-908000                                                     | functional-908000    | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-908000 cache add                                              | functional-908000    | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT | 05 Aug 24 10:26 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-908000 cache add                                              | functional-908000    | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT | 05 Aug 24 10:26 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-908000 cache add                                              | functional-908000    | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT | 05 Aug 24 10:26 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-908000 cache add                                              | functional-908000    | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT | 05 Aug 24 10:26 PDT |
|         | minikube-local-cache-test:functional-908000                              |                      |         |         |                     |                     |
| cache   | functional-908000 cache delete                                           | functional-908000    | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT | 05 Aug 24 10:26 PDT |
|         | minikube-local-cache-test:functional-908000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT | 05 Aug 24 10:26 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT | 05 Aug 24 10:26 PDT |
| ssh     | functional-908000 ssh sudo                                               | functional-908000    | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-908000                                                        | functional-908000    | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-908000 ssh                                                    | functional-908000    | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-908000 cache reload                                           | functional-908000    | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT | 05 Aug 24 10:26 PDT |
| ssh     | functional-908000 ssh                                                    | functional-908000    | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT | 05 Aug 24 10:26 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT | 05 Aug 24 10:26 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-908000 kubectl --                                             | functional-908000    | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT |                     |
|         | --context functional-908000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-908000                                                     | functional-908000    | jenkins | v1.33.1 | 05 Aug 24 10:27 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/08/05 10:27:01
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.22.5 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0805 10:27:01.211218    7353 out.go:291] Setting OutFile to fd 1 ...
I0805 10:27:01.211342    7353 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 10:27:01.211344    7353 out.go:304] Setting ErrFile to fd 2...
I0805 10:27:01.211346    7353 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 10:27:01.211460    7353 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
I0805 10:27:01.212510    7353 out.go:298] Setting JSON to false
I0805 10:27:01.228572    7353 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5191,"bootTime":1722873630,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W0805 10:27:01.228637    7353 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0805 10:27:01.234251    7353 out.go:177] * [functional-908000] minikube v1.33.1 on Darwin 14.5 (arm64)
I0805 10:27:01.243134    7353 out.go:177]   - MINIKUBE_LOCATION=19374
I0805 10:27:01.243187    7353 notify.go:220] Checking for updates...
I0805 10:27:01.249144    7353 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
I0805 10:27:01.252118    7353 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0805 10:27:01.253463    7353 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0805 10:27:01.256137    7353 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
I0805 10:27:01.259169    7353 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0805 10:27:01.262498    7353 config.go:182] Loaded profile config "functional-908000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0805 10:27:01.262551    7353 driver.go:392] Setting default libvirt URI to qemu:///system
I0805 10:27:01.267081    7353 out.go:177] * Using the qemu2 driver based on existing profile
I0805 10:27:01.274126    7353 start.go:297] selected driver: qemu2
I0805 10:27:01.274131    7353 start.go:901] validating driver "qemu2" against &{Name:functional-908000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-908000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0805 10:27:01.274201    7353 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0805 10:27:01.276398    7353 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0805 10:27:01.276415    7353 cni.go:84] Creating CNI manager for ""
I0805 10:27:01.276423    7353 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0805 10:27:01.276469    7353 start.go:340] cluster config:
{Name:functional-908000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-908000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0805 10:27:01.279888    7353 iso.go:125] acquiring lock: {Name:mk3c732f608e41abb95f9d7b90e7a96dff21b06c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0805 10:27:01.286093    7353 out.go:177] * Starting "functional-908000" primary control-plane node in "functional-908000" cluster
I0805 10:27:01.290156    7353 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
I0805 10:27:01.290170    7353 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
I0805 10:27:01.290189    7353 cache.go:56] Caching tarball of preloaded images
I0805 10:27:01.290255    7353 preload.go:172] Found /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0805 10:27:01.290259    7353 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
I0805 10:27:01.290316    7353 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/functional-908000/config.json ...
I0805 10:27:01.290635    7353 start.go:360] acquireMachinesLock for functional-908000: {Name:mkb00f076c5be0afa0d8c4c2e732c0f60f89f86b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0805 10:27:01.290667    7353 start.go:364] duration metric: took 27.625µs to acquireMachinesLock for "functional-908000"
I0805 10:27:01.290674    7353 start.go:96] Skipping create...Using existing machine configuration
I0805 10:27:01.290678    7353 fix.go:54] fixHost starting: 
I0805 10:27:01.290790    7353 fix.go:112] recreateIfNeeded on functional-908000: state=Stopped err=<nil>
W0805 10:27:01.290797    7353 fix.go:138] unexpected machine state, will restart: <nil>
I0805 10:27:01.300122    7353 out.go:177] * Restarting existing qemu2 VM for "functional-908000" ...
I0805 10:27:01.306136    7353 qemu.go:418] Using hvf for hardware acceleration
I0805 10:27:01.306175    7353 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/functional-908000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/functional-908000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/functional-908000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:78:e9:6e:95:a1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/functional-908000/disk.qcow2
I0805 10:27:01.308115    7353 main.go:141] libmachine: STDOUT: 
I0805 10:27:01.308133    7353 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0805 10:27:01.308163    7353 fix.go:56] duration metric: took 17.486083ms for fixHost
I0805 10:27:01.308165    7353 start.go:83] releasing machines lock for "functional-908000", held for 17.496334ms
W0805 10:27:01.308172    7353 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0805 10:27:01.308209    7353 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0805 10:27:01.308213    7353 start.go:729] Will try again in 5 seconds ...
I0805 10:27:06.310231    7353 start.go:360] acquireMachinesLock for functional-908000: {Name:mkb00f076c5be0afa0d8c4c2e732c0f60f89f86b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0805 10:27:06.310509    7353 start.go:364] duration metric: took 195.041µs to acquireMachinesLock for "functional-908000"
I0805 10:27:06.310594    7353 start.go:96] Skipping create...Using existing machine configuration
I0805 10:27:06.310601    7353 fix.go:54] fixHost starting: 
I0805 10:27:06.311004    7353 fix.go:112] recreateIfNeeded on functional-908000: state=Stopped err=<nil>
W0805 10:27:06.311013    7353 fix.go:138] unexpected machine state, will restart: <nil>
I0805 10:27:06.314385    7353 out.go:177] * Restarting existing qemu2 VM for "functional-908000" ...
I0805 10:27:06.323362    7353 qemu.go:418] Using hvf for hardware acceleration
I0805 10:27:06.323448    7353 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/functional-908000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/functional-908000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/functional-908000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:78:e9:6e:95:a1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/functional-908000/disk.qcow2
I0805 10:27:06.327486    7353 main.go:141] libmachine: STDOUT: 
I0805 10:27:06.327509    7353 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0805 10:27:06.327559    7353 fix.go:56] duration metric: took 16.959042ms for fixHost
I0805 10:27:06.327563    7353 start.go:83] releasing machines lock for "functional-908000", held for 17.02375ms
W0805 10:27:06.327630    7353 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-908000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0805 10:27:06.334378    7353 out.go:177] 
W0805 10:27:06.338385    7353 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0805 10:27:06.338394    7353 out.go:239] * 
W0805 10:27:06.339191    7353 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0805 10:27:06.350353    7353 out.go:177] 

* The control-plane node functional-908000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-908000"
***
--- FAIL: TestFunctional/serial/LogsCmd (0.08s)
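Both logs failures in this section share one root cause, visible in the dump above: the qemu2 driver cannot reach the socket_vmnet daemon ("Failed to connect to \"/var/run/socket_vmnet\": Connection refused"), so the VM never boots and "minikube logs" has no Linux guest output to contain the expected word. A minimal triage sketch for the CI host follows; the binary path, profile name, start flags, and socket paths are taken from the log above, while the launchd/Homebrew service handling is an assumption, not something this report confirms.

    # Check that the socket_vmnet daemon is running and its unix socket exists
    # (SocketVMnetPath and SocketVMnetClientPath come from the config dump above).
    ls -l /var/run/socket_vmnet                 # socket present?
    sudo launchctl list | grep -i socket_vmnet  # daemon loaded? (assumes a launchd-managed install)
    # If the daemon is down, restart it (Homebrew formula name "socket_vmnet" assumed):
    sudo brew services restart socket_vmnet
    # Then apply the report's own hint and retry the start that failed:
    out/minikube-darwin-arm64 delete -p functional-908000
    out/minikube-darwin-arm64 start -p functional-908000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2

If the daemon is healthy but connections are still refused, the client side (/opt/socket_vmnet/bin/socket_vmnet_client in the qemu invocation above) and its permissions are the next things to check.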

TestFunctional/serial/LogsFileCmd (0.07s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 logs --file /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd2984733348/001/logs.txt
functional_test.go:1224: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-834000 | jenkins | v1.33.1 | 05 Aug 24 10:25 PDT |                     |
|         | -p download-only-834000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 05 Aug 24 10:25 PDT | 05 Aug 24 10:25 PDT |
| delete  | -p download-only-834000                                                  | download-only-834000 | jenkins | v1.33.1 | 05 Aug 24 10:25 PDT | 05 Aug 24 10:25 PDT |
| start   | -o=json --download-only                                                  | download-only-998000 | jenkins | v1.33.1 | 05 Aug 24 10:25 PDT |                     |
|         | -p download-only-998000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.30.3                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 05 Aug 24 10:25 PDT | 05 Aug 24 10:25 PDT |
| delete  | -p download-only-998000                                                  | download-only-998000 | jenkins | v1.33.1 | 05 Aug 24 10:25 PDT | 05 Aug 24 10:25 PDT |
| start   | -o=json --download-only                                                  | download-only-689000 | jenkins | v1.33.1 | 05 Aug 24 10:25 PDT |                     |
|         | -p download-only-689000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.31.0-rc.0                                        |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT | 05 Aug 24 10:26 PDT |
| delete  | -p download-only-689000                                                  | download-only-689000 | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT | 05 Aug 24 10:26 PDT |
| delete  | -p download-only-834000                                                  | download-only-834000 | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT | 05 Aug 24 10:26 PDT |
| delete  | -p download-only-998000                                                  | download-only-998000 | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT | 05 Aug 24 10:26 PDT |
| delete  | -p download-only-689000                                                  | download-only-689000 | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT | 05 Aug 24 10:26 PDT |
| start   | --download-only -p                                                       | binary-mirror-383000 | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT |                     |
|         | binary-mirror-383000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:50983                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-383000                                                  | binary-mirror-383000 | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT | 05 Aug 24 10:26 PDT |
| addons  | enable dashboard -p                                                      | addons-690000        | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT |                     |
|         | addons-690000                                                            |                      |         |         |                     |                     |
| addons  | disable dashboard -p                                                     | addons-690000        | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT |                     |
|         | addons-690000                                                            |                      |         |         |                     |                     |
| start   | -p addons-690000 --wait=true                                             | addons-690000        | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
| delete  | -p addons-690000                                                         | addons-690000        | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT | 05 Aug 24 10:26 PDT |
| start   | -p nospam-159000 -n=1 --memory=2250 --wait=false                         | nospam-159000        | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT |                     |
|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-159000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-159000 --log_dir                                                  | nospam-159000        | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-159000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-159000 --log_dir                                                  | nospam-159000        | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-159000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-159000 --log_dir                                                  | nospam-159000        | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-159000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-159000 --log_dir                                                  | nospam-159000        | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-159000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-159000 --log_dir                                                  | nospam-159000        | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-159000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-159000 --log_dir                                                  | nospam-159000        | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-159000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-159000 --log_dir                                                  | nospam-159000        | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-159000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-159000 --log_dir                                                  | nospam-159000        | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-159000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-159000 --log_dir                                                  | nospam-159000        | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-159000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-159000 --log_dir                                                  | nospam-159000        | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT | 05 Aug 24 10:26 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-159000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-159000 --log_dir                                                  | nospam-159000        | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT | 05 Aug 24 10:26 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-159000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-159000 --log_dir                                                  | nospam-159000        | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT | 05 Aug 24 10:26 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-159000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-159000                                                         | nospam-159000        | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT | 05 Aug 24 10:26 PDT |
| start   | -p functional-908000                                                     | functional-908000    | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-908000                                                     | functional-908000    | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-908000 cache add                                              | functional-908000    | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT | 05 Aug 24 10:26 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-908000 cache add                                              | functional-908000    | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT | 05 Aug 24 10:26 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-908000 cache add                                              | functional-908000    | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT | 05 Aug 24 10:26 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-908000 cache add                                              | functional-908000    | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT | 05 Aug 24 10:26 PDT |
|         | minikube-local-cache-test:functional-908000                              |                      |         |         |                     |                     |
| cache   | functional-908000 cache delete                                           | functional-908000    | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT | 05 Aug 24 10:26 PDT |
|         | minikube-local-cache-test:functional-908000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT | 05 Aug 24 10:26 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT | 05 Aug 24 10:26 PDT |
| ssh     | functional-908000 ssh sudo                                               | functional-908000    | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-908000                                                        | functional-908000    | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-908000 ssh                                                    | functional-908000    | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-908000 cache reload                                           | functional-908000    | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT | 05 Aug 24 10:26 PDT |
| ssh     | functional-908000 ssh                                                    | functional-908000    | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT | 05 Aug 24 10:26 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT | 05 Aug 24 10:26 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-908000 kubectl --                                             | functional-908000    | jenkins | v1.33.1 | 05 Aug 24 10:26 PDT |                     |
|         | --context functional-908000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-908000                                                     | functional-908000    | jenkins | v1.33.1 | 05 Aug 24 10:27 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/08/05 10:27:01
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.22.5 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0805 10:27:01.211218    7353 out.go:291] Setting OutFile to fd 1 ...
I0805 10:27:01.211342    7353 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 10:27:01.211344    7353 out.go:304] Setting ErrFile to fd 2...
I0805 10:27:01.211346    7353 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 10:27:01.211460    7353 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
I0805 10:27:01.212510    7353 out.go:298] Setting JSON to false
I0805 10:27:01.228572    7353 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5191,"bootTime":1722873630,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W0805 10:27:01.228637    7353 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0805 10:27:01.234251    7353 out.go:177] * [functional-908000] minikube v1.33.1 on Darwin 14.5 (arm64)
I0805 10:27:01.243134    7353 out.go:177]   - MINIKUBE_LOCATION=19374
I0805 10:27:01.243187    7353 notify.go:220] Checking for updates...
I0805 10:27:01.249144    7353 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
I0805 10:27:01.252118    7353 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0805 10:27:01.253463    7353 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0805 10:27:01.256137    7353 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
I0805 10:27:01.259169    7353 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0805 10:27:01.262498    7353 config.go:182] Loaded profile config "functional-908000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0805 10:27:01.262551    7353 driver.go:392] Setting default libvirt URI to qemu:///system
I0805 10:27:01.267081    7353 out.go:177] * Using the qemu2 driver based on existing profile
I0805 10:27:01.274126    7353 start.go:297] selected driver: qemu2
I0805 10:27:01.274131    7353 start.go:901] validating driver "qemu2" against &{Name:functional-908000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-908000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0805 10:27:01.274201    7353 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0805 10:27:01.276398    7353 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0805 10:27:01.276415    7353 cni.go:84] Creating CNI manager for ""
I0805 10:27:01.276423    7353 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0805 10:27:01.276469    7353 start.go:340] cluster config:
{Name:functional-908000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-908000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0805 10:27:01.279888    7353 iso.go:125] acquiring lock: {Name:mk3c732f608e41abb95f9d7b90e7a96dff21b06c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0805 10:27:01.286093    7353 out.go:177] * Starting "functional-908000" primary control-plane node in "functional-908000" cluster
I0805 10:27:01.290156    7353 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
I0805 10:27:01.290170    7353 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
I0805 10:27:01.290189    7353 cache.go:56] Caching tarball of preloaded images
I0805 10:27:01.290255    7353 preload.go:172] Found /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0805 10:27:01.290259    7353 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
I0805 10:27:01.290316    7353 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/functional-908000/config.json ...
I0805 10:27:01.290635    7353 start.go:360] acquireMachinesLock for functional-908000: {Name:mkb00f076c5be0afa0d8c4c2e732c0f60f89f86b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0805 10:27:01.290667    7353 start.go:364] duration metric: took 27.625µs to acquireMachinesLock for "functional-908000"
I0805 10:27:01.290674    7353 start.go:96] Skipping create...Using existing machine configuration
I0805 10:27:01.290678    7353 fix.go:54] fixHost starting: 
I0805 10:27:01.290790    7353 fix.go:112] recreateIfNeeded on functional-908000: state=Stopped err=<nil>
W0805 10:27:01.290797    7353 fix.go:138] unexpected machine state, will restart: <nil>
I0805 10:27:01.300122    7353 out.go:177] * Restarting existing qemu2 VM for "functional-908000" ...
I0805 10:27:01.306136    7353 qemu.go:418] Using hvf for hardware acceleration
I0805 10:27:01.306175    7353 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/functional-908000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/functional-908000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/functional-908000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:78:e9:6e:95:a1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/functional-908000/disk.qcow2
I0805 10:27:01.308115    7353 main.go:141] libmachine: STDOUT: 
I0805 10:27:01.308133    7353 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0805 10:27:01.308163    7353 fix.go:56] duration metric: took 17.486083ms for fixHost
I0805 10:27:01.308165    7353 start.go:83] releasing machines lock for "functional-908000", held for 17.496334ms
W0805 10:27:01.308172    7353 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0805 10:27:01.308209    7353 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0805 10:27:01.308213    7353 start.go:729] Will try again in 5 seconds ...
I0805 10:27:06.310231    7353 start.go:360] acquireMachinesLock for functional-908000: {Name:mkb00f076c5be0afa0d8c4c2e732c0f60f89f86b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0805 10:27:06.310509    7353 start.go:364] duration metric: took 195.041µs to acquireMachinesLock for "functional-908000"
I0805 10:27:06.310594    7353 start.go:96] Skipping create...Using existing machine configuration
I0805 10:27:06.310601    7353 fix.go:54] fixHost starting: 
I0805 10:27:06.311004    7353 fix.go:112] recreateIfNeeded on functional-908000: state=Stopped err=<nil>
W0805 10:27:06.311013    7353 fix.go:138] unexpected machine state, will restart: <nil>
I0805 10:27:06.314385    7353 out.go:177] * Restarting existing qemu2 VM for "functional-908000" ...
I0805 10:27:06.323362    7353 qemu.go:418] Using hvf for hardware acceleration
I0805 10:27:06.323448    7353 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/functional-908000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/functional-908000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/functional-908000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:78:e9:6e:95:a1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/functional-908000/disk.qcow2
I0805 10:27:06.327486    7353 main.go:141] libmachine: STDOUT: 
I0805 10:27:06.327509    7353 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0805 10:27:06.327559    7353 fix.go:56] duration metric: took 16.959042ms for fixHost
I0805 10:27:06.327563    7353 start.go:83] releasing machines lock for "functional-908000", held for 17.02375ms
W0805 10:27:06.327630    7353 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-908000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0805 10:27:06.334378    7353 out.go:177] 
W0805 10:27:06.338385    7353 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0805 10:27:06.338394    7353 out.go:239] * 
W0805 10:27:06.339191    7353 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0805 10:27:06.350353    7353 out.go:177] 

***
--- FAIL: TestFunctional/serial/LogsFileCmd (0.07s)
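
Note: every start attempt in the "Last Start" log above dies at the same step: qemu-system-aarch64 is launched through /opt/socket_vmnet/bin/socket_vmnet_client, and the client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so no VM ever boots and the remaining functional tests fail for the same root cause. A minimal reachability probe for that socket, sketched in Go with only the standard library (the socket path is taken from the log; how the daemon should be restarted depends on how socket_vmnet was installed on the agent):

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Path copied from the failing log line; a "connection refused"
	// here reproduces the driver-start failure (no daemon listening).
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
		os.Exit(1)
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}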

TestFunctional/serial/InvalidService (0.03s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-908000 apply -f testdata/invalidsvc.yaml
functional_test.go:2317: (dbg) Non-zero exit: kubectl --context functional-908000 apply -f testdata/invalidsvc.yaml: exit status 1 (28.461667ms)

** stderr ** 
	error: context "functional-908000" does not exist

** /stderr **
functional_test.go:2319: kubectl --context functional-908000 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/serial/InvalidService (0.03s)
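
Note: this failure is downstream of the start failure above: because the VM never booted, minikube never wrote a "functional-908000" context into the kubeconfig, so every kubectl --context invocation exits 1. A quick pre-check for the context, sketched in Go (profile name taken from the log; kubectl must be on PATH):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// "kubectl config get-contexts -o name" prints one context name per line.
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "kubectl failed:", err)
		os.Exit(1)
	}
	const want = "functional-908000" // profile under test
	for _, name := range strings.Fields(string(out)) {
		if name == want {
			fmt.Println("context exists:", want)
			return
		}
	}
	fmt.Printf("context %q does not exist\n", want)
}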

TestFunctional/parallel/DashboardCmd (0.2s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-908000 --alsologtostderr -v=1]
functional_test.go:914: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-908000 --alsologtostderr -v=1] ...
functional_test.go:906: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-908000 --alsologtostderr -v=1] stdout:
functional_test.go:906: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-908000 --alsologtostderr -v=1] stderr:
I0805 10:27:45.026388    7559 out.go:291] Setting OutFile to fd 1 ...
I0805 10:27:45.026997    7559 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 10:27:45.027001    7559 out.go:304] Setting ErrFile to fd 2...
I0805 10:27:45.027004    7559 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 10:27:45.027193    7559 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
I0805 10:27:45.027392    7559 mustload.go:65] Loading cluster: functional-908000
I0805 10:27:45.027573    7559 config.go:182] Loaded profile config "functional-908000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0805 10:27:45.031632    7559 out.go:177] * The control-plane node functional-908000 host is not running: state=Stopped
I0805 10:27:45.035680    7559 out.go:177]   To start a cluster, run: "minikube start -p functional-908000"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000: exit status 7 (41.05125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-908000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/DashboardCmd (0.20s)

TestFunctional/parallel/StatusCmd (0.17s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 status
functional_test.go:850: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 status: exit status 7 (73.157709ms)

-- stdout --
	functional-908000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
functional_test.go:852: failed to run minikube status. args "out/minikube-darwin-arm64 -p functional-908000 status" : exit status 7
functional_test.go:856: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:856: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 7 (33.672125ms)

-- stdout --
	host:Stopped,kublet:Stopped,apiserver:Stopped,kubeconfig:Stopped

-- /stdout --
functional_test.go:858: failed to run minikube status with custom format: args "out/minikube-darwin-arm64 -p functional-908000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 7
functional_test.go:868: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 status -o json
functional_test.go:868: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 status -o json: exit status 7 (29.739375ms)

-- stdout --
	{"Name":"functional-908000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
functional_test.go:870: failed to run minikube status with json output. args "out/minikube-darwin-arm64 -p functional-908000 status -o json" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000: exit status 7 (30.766333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-908000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/StatusCmd (0.17s)
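
Note: the --format/-f argument to "minikube status" is a Go text/template rendered against the per-node status; the JSON output above shows the field names. A sketch of the same rendering mechanism (the struct here is illustrative, mirroring the JSON keys rather than minikube's internal type, and it reproduces the test's literal "kublet" key):

package main

import (
	"os"
	"text/template"
)

// Illustrative status shape; field names mirror the JSON output above.
type Status struct {
	Name, Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	st := Status{Name: "functional-908000", Host: "Stopped", Kubelet: "Stopped", APIServer: "Stopped", Kubeconfig: "Stopped"}
	// The exact template string the test passes via -f.
	tmpl := template.Must(template.New("status").Parse(
		"host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"))
	if err := tmpl.Execute(os.Stdout, st); err != nil {
		panic(err)
	}
}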

TestFunctional/parallel/ServiceCmdConnect (0.14s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-908000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1623: (dbg) Non-zero exit: kubectl --context functional-908000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.593625ms)

** stderr ** 
	error: context "functional-908000" does not exist

** /stderr **
functional_test.go:1629: failed to create hello-node deployment with this command "kubectl --context functional-908000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
functional_test.go:1594: service test failed - dumping debug information
functional_test.go:1595: -----------------------service failure post-mortem--------------------------------
functional_test.go:1598: (dbg) Run:  kubectl --context functional-908000 describe po hello-node-connect
functional_test.go:1598: (dbg) Non-zero exit: kubectl --context functional-908000 describe po hello-node-connect: exit status 1 (26.671ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-908000

** /stderr **
functional_test.go:1600: "kubectl --context functional-908000 describe po hello-node-connect" failed: exit status 1
functional_test.go:1602: hello-node pod describe:
functional_test.go:1604: (dbg) Run:  kubectl --context functional-908000 logs -l app=hello-node-connect
functional_test.go:1604: (dbg) Non-zero exit: kubectl --context functional-908000 logs -l app=hello-node-connect: exit status 1 (26.861584ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-908000

** /stderr **
functional_test.go:1606: "kubectl --context functional-908000 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1608: hello-node logs:
functional_test.go:1610: (dbg) Run:  kubectl --context functional-908000 describe svc hello-node-connect
functional_test.go:1610: (dbg) Non-zero exit: kubectl --context functional-908000 describe svc hello-node-connect: exit status 1 (26.632416ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-908000

** /stderr **
functional_test.go:1612: "kubectl --context functional-908000 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1614: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000: exit status 7 (29.962416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-908000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (0.03s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: failed waiting for storage-provisioner: client config: context "functional-908000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000: exit status 7 (32.735292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-908000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (0.03s)

TestFunctional/parallel/SSHCmd (0.12s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "echo hello"
functional_test.go:1721: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "echo hello": exit status 83 (45.567208ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test.go:1726: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-908000 ssh \"echo hello\"" : exit status 83
functional_test.go:1730: expected minikube ssh command output to be -"hello"- but got *"* The control-plane node functional-908000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-908000\"\n"*. args "out/minikube-darwin-arm64 -p functional-908000 ssh \"echo hello\""
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "cat /etc/hostname"
functional_test.go:1738: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "cat /etc/hostname": exit status 83 (38.901ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test.go:1744: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-908000 ssh \"cat /etc/hostname\"" : exit status 83
functional_test.go:1748: expected minikube ssh command output to be -"functional-908000"- but got *"* The control-plane node functional-908000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-908000\"\n"*. args "out/minikube-darwin-arm64 -p functional-908000 ssh \"cat /etc/hostname\""
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000: exit status 7 (39.034125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-908000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/SSHCmd (0.12s)
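
Note: on a stopped guest, the ssh and cp subcommands print the "host is not running" advice and exit with status 83, which the harness recovers from the process state rather than from stdout. A sketch of that recovery in Go (binary path and arguments copied from the log):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "-p", "functional-908000", "ssh", "echo hello")
	out, err := cmd.Output()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// A stopped guest surfaces here as a non-zero exit code (83 above).
		fmt.Printf("non-zero exit: %d\n", ee.ExitCode())
		return
	}
	if err != nil {
		fmt.Println("failed to run:", err)
		return
	}
	fmt.Printf("stdout: %s", out)
}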

TestFunctional/parallel/CpCmd (0.29s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 cp testdata/cp-test.txt /home/docker/cp-test.txt: exit status 83 (55.774083ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-908000 cp testdata/cp-test.txt /home/docker/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh -n functional-908000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh -n functional-908000 "sudo cat /home/docker/cp-test.txt": exit status 83 (42.816125ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-908000 ssh -n functional-908000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
strings.Join({
+ 	"* ",
	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-908000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-908000\"\n",
}, "")
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 cp functional-908000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd805877839/001/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 cp functional-908000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd805877839/001/cp-test.txt: exit status 83 (48.629042ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-908000 cp functional-908000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd805877839/001/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh -n functional-908000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh -n functional-908000 "sudo cat /home/docker/cp-test.txt": exit status 83 (40.911333ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-908000 ssh -n functional-908000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:528: failed to read test file 'testdata/cp-test.txt' : open /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd805877839/001/cp-test.txt: no such file or directory
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
string(
- 	"* The control-plane node functional-908000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-908000\"\n",
+ 	"",
)
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt: exit status 83 (44.755209ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-908000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh -n functional-908000 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh -n functional-908000 "sudo cat /tmp/does/not/exist/cp-test.txt": exit status 83 (53.103042ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-908000 ssh -n functional-908000 \"sudo cat /tmp/does/not/exist/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
strings.Join({
+ 	"* ",
	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-908000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-908000\"\n",
}, "")
--- FAIL: TestFunctional/parallel/CpCmd (0.29s)
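
Note: the "(-want +got)" blocks above are diffs from the go-cmp library: "-" lines are the expected file content, "+" lines are what actually came back (here, minikube's stopped-host advice instead of the copied file). A minimal reproduction of that diff style (strings copied from the failure above):

package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	want := "Test file for checking file cp process"
	got := "* The control-plane node functional-908000 host is not running: state=Stopped\n" +
		"  To start a cluster, run: \"minikube start -p functional-908000\"\n"
	// cmp.Diff returns "" on equality; otherwise a -want +got diff
	// like the ones printed by helpers_test.go above.
	if diff := cmp.Diff(want, got); diff != "" {
		fmt.Printf("content mismatch (-want +got):\n%s", diff)
	}
}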

TestFunctional/parallel/FileSync (0.07s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/7007/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "sudo cat /etc/test/nested/copy/7007/hosts"
functional_test.go:1927: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "sudo cat /etc/test/nested/copy/7007/hosts": exit status 83 (39.4975ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test.go:1929: out/minikube-darwin-arm64 -p functional-908000 ssh "sudo cat /etc/test/nested/copy/7007/hosts" failed: exit status 83
functional_test.go:1932: file sync test content: * The control-plane node functional-908000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-908000"
functional_test.go:1942: /etc/sync.test content mismatch (-want +got):
strings.Join({
+ 	"* ",
	"T",
- 	"est file for checking file sync process",
+ 	"he control-plane node functional-908000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-908000\"\n",
}, "")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000: exit status 7 (29.312916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-908000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/FileSync (0.07s)

TestFunctional/parallel/CertSync (0.29s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/7007.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "sudo cat /etc/ssl/certs/7007.pem"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "sudo cat /etc/ssl/certs/7007.pem": exit status 83 (42.850375ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/etc/ssl/certs/7007.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-908000 ssh \"sudo cat /etc/ssl/certs/7007.pem\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/7007.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-908000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-908000"
	"""
)
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/7007.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "sudo cat /usr/share/ca-certificates/7007.pem"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "sudo cat /usr/share/ca-certificates/7007.pem": exit status 83 (49.999ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/usr/share/ca-certificates/7007.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-908000 ssh \"sudo cat /usr/share/ca-certificates/7007.pem\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /usr/share/ca-certificates/7007.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-908000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-908000"
	"""
)
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "sudo cat /etc/ssl/certs/51391683.0": exit status 83 (42.608ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/etc/ssl/certs/51391683.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-908000 ssh \"sudo cat /etc/ssl/certs/51391683.0\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/51391683.0 mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-908000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-908000"
	"""
)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/70072.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "sudo cat /etc/ssl/certs/70072.pem"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "sudo cat /etc/ssl/certs/70072.pem": exit status 83 (40.765333ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test.go:1998: failed to check existence of "/etc/ssl/certs/70072.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-908000 ssh \"sudo cat /etc/ssl/certs/70072.pem\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/70072.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-908000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-908000"
	"""
)
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/70072.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "sudo cat /usr/share/ca-certificates/70072.pem"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "sudo cat /usr/share/ca-certificates/70072.pem": exit status 83 (39.709083ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test.go:1998: failed to check existence of "/usr/share/ca-certificates/70072.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-908000 ssh \"sudo cat /usr/share/ca-certificates/70072.pem\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /usr/share/ca-certificates/70072.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-908000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-908000"
	"""
)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": exit status 83 (39.67825ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test.go:1998: failed to check existence of "/etc/ssl/certs/3ec20f2e.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-908000 ssh \"sudo cat /etc/ssl/certs/3ec20f2e.0\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/3ec20f2e.0 mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-908000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-908000"
	"""
)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000: exit status 7 (32.948042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-908000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/CertSync (0.29s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-908000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:218: (dbg) Non-zero exit: kubectl --context functional-908000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (26.50425ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-908000

** /stderr **
functional_test.go:220: failed to 'kubectl get nodes' with args "kubectl --context functional-908000 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:226: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-908000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-908000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-908000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-908000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/primary" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-908000

** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-908000 -n functional-908000: exit status 7 (28.633917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-908000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.04s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "sudo systemctl is-active crio": exit status 83 (40.491333ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test.go:2026: output of 
-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --: exit status 83
functional_test.go:2029: For runtime "docker": expected "crio" to be inactive but got "* The control-plane node functional-908000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-908000\"\n" 
--- FAIL: TestFunctional/parallel/NonActiveRuntimeDisabled (0.04s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-908000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-908000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 83. stderr: I0805 10:27:06.968979    7404 out.go:291] Setting OutFile to fd 1 ...
I0805 10:27:06.969262    7404 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 10:27:06.969267    7404 out.go:304] Setting ErrFile to fd 2...
I0805 10:27:06.969269    7404 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 10:27:06.969414    7404 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
I0805 10:27:06.969645    7404 mustload.go:65] Loading cluster: functional-908000
I0805 10:27:06.969858    7404 config.go:182] Loaded profile config "functional-908000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0805 10:27:06.974550    7404 out.go:177] * The control-plane node functional-908000 host is not running: state=Stopped
I0805 10:27:06.978518    7404 out.go:177]   To start a cluster, run: "minikube start -p functional-908000"

stdout: * The control-plane node functional-908000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-908000"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-908000 tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 7405: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-908000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-908000 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-908000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-908000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-908000 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.07s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:208: failed to get Kubernetes client for "functional-908000": client config: context "functional-908000" does not exist
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (77.3s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-908000 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-908000 get svc nginx-svc: exit status 1 (68.529708ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-908000

** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-908000 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (77.30s)

TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-908000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1433: (dbg) Non-zero exit: kubectl --context functional-908000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.339917ms)

** stderr ** 
	error: context "functional-908000" does not exist

** /stderr **
functional_test.go:1439: failed to create hello-node deployment with this command "kubectl --context functional-908000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

TestFunctional/parallel/ServiceCmd/List (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 service list
functional_test.go:1455: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 service list: exit status 83 (41.034917ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test.go:1457: failed to do service list. args "out/minikube-darwin-arm64 -p functional-908000 service list" : exit status 83
functional_test.go:1460: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-908000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-908000\"\n"-
--- FAIL: TestFunctional/parallel/ServiceCmd/List (0.04s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 service list -o json
functional_test.go:1485: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 service list -o json: exit status 83 (41.830417ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test.go:1487: failed to list services with json format. args "out/minikube-darwin-arm64 -p functional-908000 service list -o json": exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/JSONOutput (0.04s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 service --namespace=default --https --url hello-node: exit status 83 (41.700792ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test.go:1507: failed to get service url. args "out/minikube-darwin-arm64 -p functional-908000 service --namespace=default --https --url hello-node" : exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.04s)

TestFunctional/parallel/ServiceCmd/Format (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 service hello-node --url --format={{.IP}}
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 service hello-node --url --format={{.IP}}: exit status 83 (40.96125ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test.go:1538: failed to get service url with custom format. args "out/minikube-darwin-arm64 -p functional-908000 service hello-node --url --format={{.IP}}": exit status 83
functional_test.go:1544: "* The control-plane node functional-908000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-908000\"" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.04s)

TestFunctional/parallel/ServiceCmd/URL (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 service hello-node --url
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 service hello-node --url: exit status 83 (41.763708ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test.go:1557: failed to get service url. args: "out/minikube-darwin-arm64 -p functional-908000 service hello-node --url": exit status 83
functional_test.go:1561: found endpoint for hello-node: * The control-plane node functional-908000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-908000"
functional_test.go:1565: failed to parse "* The control-plane node functional-908000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-908000\"": parse "* The control-plane node functional-908000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-908000\"": net/url: invalid control character in URL
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.04s)

TestFunctional/parallel/Version/components (0.04s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 version -o=json --components
functional_test.go:2266: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 version -o=json --components: exit status 83 (40.776416ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test.go:2268: error version: exit status 83
functional_test.go:2273: expected to see "buildctl" in the minikube version --components but got:
* The control-plane node functional-908000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-908000"
functional_test.go:2273: expected to see "commit" in the minikube version --components but got:
* The control-plane node functional-908000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-908000"
functional_test.go:2273: expected to see "containerd" in the minikube version --components but got:
* The control-plane node functional-908000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-908000"
functional_test.go:2273: expected to see "crictl" in the minikube version --components but got:
* The control-plane node functional-908000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-908000"
functional_test.go:2273: expected to see "crio" in the minikube version --components but got:
* The control-plane node functional-908000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-908000"
functional_test.go:2273: expected to see "ctr" in the minikube version --components but got:
* The control-plane node functional-908000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-908000"
functional_test.go:2273: expected to see "docker" in the minikube version --components but got:
* The control-plane node functional-908000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-908000"
functional_test.go:2273: expected to see "minikubeVersion" in the minikube version --components but got:
* The control-plane node functional-908000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-908000"
functional_test.go:2273: expected to see "podman" in the minikube version --components but got:
* The control-plane node functional-908000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-908000"
functional_test.go:2273: expected to see "crun" in the minikube version --components but got:
* The control-plane node functional-908000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-908000"
--- FAIL: TestFunctional/parallel/Version/components (0.04s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.03s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-908000 image ls --format short --alsologtostderr:

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-908000 image ls --format short --alsologtostderr:
I0805 10:27:49.817057    7677 out.go:291] Setting OutFile to fd 1 ...
I0805 10:27:49.817197    7677 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 10:27:49.817201    7677 out.go:304] Setting ErrFile to fd 2...
I0805 10:27:49.817204    7677 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 10:27:49.817328    7677 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
I0805 10:27:49.817746    7677 config.go:182] Loaded profile config "functional-908000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0805 10:27:49.817817    7677 config.go:182] Loaded profile config "functional-908000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
functional_test.go:274: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (0.03s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.03s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-908000 image ls --format table --alsologtostderr:
|-------|-----|----------|------|
| Image | Tag | Image ID | Size |
|-------|-----|----------|------|
|-------|-----|----------|------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-908000 image ls --format table --alsologtostderr:
I0805 10:27:50.034877    7691 out.go:291] Setting OutFile to fd 1 ...
I0805 10:27:50.035007    7691 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 10:27:50.035010    7691 out.go:304] Setting ErrFile to fd 2...
I0805 10:27:50.035012    7691 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 10:27:50.035145    7691 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
I0805 10:27:50.035560    7691 config.go:182] Loaded profile config "functional-908000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0805 10:27:50.035622    7691 config.go:182] Loaded profile config "functional-908000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
functional_test.go:274: expected | registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListTable (0.03s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-908000 image ls --format json --alsologtostderr:
[]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-908000 image ls --format json --alsologtostderr:
I0805 10:27:49.999003    7689 out.go:291] Setting OutFile to fd 1 ...
I0805 10:27:49.999168    7689 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 10:27:49.999171    7689 out.go:304] Setting ErrFile to fd 2...
I0805 10:27:49.999174    7689 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 10:27:49.999305    7689 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
I0805 10:27:49.999728    7689 config.go:182] Loaded profile config "functional-908000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0805 10:27:49.999794    7689 config.go:182] Loaded profile config "functional-908000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
functional_test.go:274: expected ["registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.03s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-908000 image ls --format yaml --alsologtostderr:
[]

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-908000 image ls --format yaml --alsologtostderr:
I0805 10:27:49.851349    7679 out.go:291] Setting OutFile to fd 1 ...
I0805 10:27:49.851481    7679 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 10:27:49.851484    7679 out.go:304] Setting ErrFile to fd 2...
I0805 10:27:49.851486    7679 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 10:27:49.851614    7679 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
I0805 10:27:49.852002    7679 config.go:182] Loaded profile config "functional-908000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0805 10:27:49.852060    7679 config.go:182] Loaded profile config "functional-908000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
functional_test.go:274: expected - registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListYaml (0.03s)

TestFunctional/parallel/ImageCommands/ImageBuild (0.11s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh pgrep buildkitd: exit status 83 (41.897083ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 image build -t localhost/my-image:functional-908000 testdata/build --alsologtostderr
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-908000 image build -t localhost/my-image:functional-908000 testdata/build --alsologtostderr:
I0805 10:27:49.927285    7685 out.go:291] Setting OutFile to fd 1 ...
I0805 10:27:49.927909    7685 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 10:27:49.927914    7685 out.go:304] Setting ErrFile to fd 2...
I0805 10:27:49.927917    7685 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 10:27:49.928320    7685 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
I0805 10:27:49.928721    7685 config.go:182] Loaded profile config "functional-908000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0805 10:27:49.929168    7685 config.go:182] Loaded profile config "functional-908000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0805 10:27:49.929423    7685 build_images.go:133] succeeded building to: 
I0805 10:27:49.929427    7685 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 image ls
functional_test.go:442: expected "localhost/my-image:functional-908000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (0.11s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 image load --daemon docker.io/kicbase/echo-server:functional-908000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 image ls
functional_test.go:442: expected "docker.io/kicbase/echo-server:functional-908000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.30s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 image load --daemon docker.io/kicbase/echo-server:functional-908000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 image ls
functional_test.go:442: expected "docker.io/kicbase/echo-server:functional-908000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.28s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-908000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 image load --daemon docker.io/kicbase/echo-server:functional-908000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 image ls
functional_test.go:442: expected "docker.io/kicbase/echo-server:functional-908000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.15s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.03s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 image save docker.io/kicbase/echo-server:functional-908000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:385: expected "/Users/jenkins/workspace/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.03s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 image ls
functional_test.go:442: expected "docker.io/kicbase/echo-server:functional-908000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.07s)

TestFunctional/parallel/DockerEnv/bash (0.05s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-908000 docker-env) && out/minikube-darwin-arm64 status -p functional-908000"
functional_test.go:495: (dbg) Non-zero exit: /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-908000 docker-env) && out/minikube-darwin-arm64 status -p functional-908000": exit status 1 (46.321042ms)
functional_test.go:501: failed to do status after eval-ing docker-env. error: exit status 1
--- FAIL: TestFunctional/parallel/DockerEnv/bash (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 update-context --alsologtostderr -v=2: exit status 83 (41.630625ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
** stderr ** 
	I0805 10:27:50.068927    7693 out.go:291] Setting OutFile to fd 1 ...
	I0805 10:27:50.069296    7693 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:27:50.069300    7693 out.go:304] Setting ErrFile to fd 2...
	I0805 10:27:50.069303    7693 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:27:50.069473    7693 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
	I0805 10:27:50.069696    7693 mustload.go:65] Loading cluster: functional-908000
	I0805 10:27:50.069878    7693 config.go:182] Loaded profile config "functional-908000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 10:27:50.074313    7693 out.go:177] * The control-plane node functional-908000 host is not running: state=Stopped
	I0805 10:27:50.078334    7693 out.go:177]   To start a cluster, run: "minikube start -p functional-908000"

** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-908000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-908000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-908000\"\n", want=*"No changes"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 update-context --alsologtostderr -v=2: exit status 83 (41.552ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
** stderr ** 
	I0805 10:27:50.152848    7697 out.go:291] Setting OutFile to fd 1 ...
	I0805 10:27:50.152995    7697 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:27:50.152999    7697 out.go:304] Setting ErrFile to fd 2...
	I0805 10:27:50.153001    7697 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:27:50.153127    7697 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
	I0805 10:27:50.153355    7697 mustload.go:65] Loading cluster: functional-908000
	I0805 10:27:50.153549    7697 config.go:182] Loaded profile config "functional-908000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 10:27:50.157264    7697 out.go:177] * The control-plane node functional-908000 host is not running: state=Stopped
	I0805 10:27:50.161340    7697 out.go:177]   To start a cluster, run: "minikube start -p functional-908000"

** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-908000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-908000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-908000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.04s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 update-context --alsologtostderr -v=2: exit status 83 (40.650833ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
** stderr ** 
	I0805 10:27:50.110756    7695 out.go:291] Setting OutFile to fd 1 ...
	I0805 10:27:50.110906    7695 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:27:50.110909    7695 out.go:304] Setting ErrFile to fd 2...
	I0805 10:27:50.110912    7695 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:27:50.111060    7695 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
	I0805 10:27:50.111306    7695 mustload.go:65] Loading cluster: functional-908000
	I0805 10:27:50.111498    7695 config.go:182] Loaded profile config "functional-908000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 10:27:50.116337    7695 out.go:177] * The control-plane node functional-908000 host is not running: state=Stopped
	I0805 10:27:50.119270    7695 out.go:177]   To start a cluster, run: "minikube start -p functional-908000"

** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-908000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-908000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-908000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.04s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:319: (dbg) Non-zero exit: dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A: exit status 9 (15.023815584s)

-- stdout --
	
	; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
	; (1 server found)
	;; global options: +cmd
	;; connection timed out; no servers could be reached

-- /stdout --
functional_test_tunnel_test.go:322: failed to resolve DNS name: exit status 9
functional_test_tunnel_test.go:329: expected body to contain "ANSWER: 1", but got *"\n; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A\n; (1 server found)\n;; global options: +cmd\n;; connection timed out; no servers could be reached\n"*
functional_test_tunnel_test.go:332: (dbg) Run:  scutil --dns
functional_test_tunnel_test.go:336: debug for DNS configuration:
DNS configuration

resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
flags    : Request A records
reach    : 0x00000002 (Reachable)

resolver #2
domain   : local
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300000

resolver #3
domain   : 254.169.in-addr.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300200

resolver #4
domain   : 8.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300400

resolver #5
domain   : 9.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300600

resolver #6
domain   : a.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300800

resolver #7
domain   : b.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 301000

resolver #8
domain   : cluster.local
nameserver[0] : 10.96.0.10
flags    : Request A records
reach    : 0x00000002 (Reachable)
order    : 1

DNS configuration (for scoped queries)

resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
if_index : 14 (en0)
flags    : Scoped, Request A records
reach    : 0x00000002 (Reachable)
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (39.68s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:419: failed to hit nginx with DNS forwarded "http://nginx-svc.default.svc.cluster.local.": Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:426: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (39.68s)

TestMultiControlPlane/serial/StartCluster (9.84s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-035000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-035000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (9.765621958s)

-- stdout --
	* [ha-035000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19374
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "ha-035000" primary control-plane node in "ha-035000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "ha-035000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0805 10:29:29.471747    7754 out.go:291] Setting OutFile to fd 1 ...
	I0805 10:29:29.471891    7754 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:29:29.471894    7754 out.go:304] Setting ErrFile to fd 2...
	I0805 10:29:29.471896    7754 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:29:29.472010    7754 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
	I0805 10:29:29.473060    7754 out.go:298] Setting JSON to false
	I0805 10:29:29.489090    7754 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5339,"bootTime":1722873630,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 10:29:29.489166    7754 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 10:29:29.495083    7754 out.go:177] * [ha-035000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 10:29:29.502122    7754 out.go:177]   - MINIKUBE_LOCATION=19374
	I0805 10:29:29.502195    7754 notify.go:220] Checking for updates...
	I0805 10:29:29.509032    7754 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	I0805 10:29:29.512041    7754 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 10:29:29.514945    7754 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 10:29:29.518027    7754 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	I0805 10:29:29.521082    7754 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 10:29:29.522701    7754 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 10:29:29.527036    7754 out.go:177] * Using the qemu2 driver based on user configuration
	I0805 10:29:29.533912    7754 start.go:297] selected driver: qemu2
	I0805 10:29:29.533920    7754 start.go:901] validating driver "qemu2" against <nil>
	I0805 10:29:29.533928    7754 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 10:29:29.536125    7754 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 10:29:29.538981    7754 out.go:177] * Automatically selected the socket_vmnet network
	I0805 10:29:29.542133    7754 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 10:29:29.542161    7754 cni.go:84] Creating CNI manager for ""
	I0805 10:29:29.542167    7754 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0805 10:29:29.542176    7754 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0805 10:29:29.542202    7754 start.go:340] cluster config:
	{Name:ha-035000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-035000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 10:29:29.545942    7754 iso.go:125] acquiring lock: {Name:mk3c732f608e41abb95f9d7b90e7a96dff21b06c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 10:29:29.553971    7754 out.go:177] * Starting "ha-035000" primary control-plane node in "ha-035000" cluster
	I0805 10:29:29.558045    7754 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 10:29:29.558059    7754 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0805 10:29:29.558069    7754 cache.go:56] Caching tarball of preloaded images
	I0805 10:29:29.558125    7754 preload.go:172] Found /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 10:29:29.558130    7754 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 10:29:29.558318    7754 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/ha-035000/config.json ...
	I0805 10:29:29.558329    7754 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/ha-035000/config.json: {Name:mk75e9c99c79996c43e2def67f90789fc119521f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 10:29:29.558741    7754 start.go:360] acquireMachinesLock for ha-035000: {Name:mkb00f076c5be0afa0d8c4c2e732c0f60f89f86b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 10:29:29.558774    7754 start.go:364] duration metric: took 27.5µs to acquireMachinesLock for "ha-035000"
	I0805 10:29:29.558786    7754 start.go:93] Provisioning new machine with config: &{Name:ha-035000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-035000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 10:29:29.558812    7754 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 10:29:29.567017    7754 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0805 10:29:29.584386    7754 start.go:159] libmachine.API.Create for "ha-035000" (driver="qemu2")
	I0805 10:29:29.584417    7754 client.go:168] LocalClient.Create starting
	I0805 10:29:29.584490    7754 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/ca.pem
	I0805 10:29:29.584520    7754 main.go:141] libmachine: Decoding PEM data...
	I0805 10:29:29.584529    7754 main.go:141] libmachine: Parsing certificate...
	I0805 10:29:29.584564    7754 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/cert.pem
	I0805 10:29:29.584591    7754 main.go:141] libmachine: Decoding PEM data...
	I0805 10:29:29.584600    7754 main.go:141] libmachine: Parsing certificate...
	I0805 10:29:29.585082    7754 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19374-6507/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 10:29:29.735909    7754 main.go:141] libmachine: Creating SSH key...
	I0805 10:29:29.766432    7754 main.go:141] libmachine: Creating Disk image...
	I0805 10:29:29.766436    7754 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 10:29:29.766604    7754 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/ha-035000/disk.qcow2.raw /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/ha-035000/disk.qcow2
	I0805 10:29:29.775713    7754 main.go:141] libmachine: STDOUT: 
	I0805 10:29:29.775729    7754 main.go:141] libmachine: STDERR: 
	I0805 10:29:29.775771    7754 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/ha-035000/disk.qcow2 +20000M
	I0805 10:29:29.783560    7754 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 10:29:29.783574    7754 main.go:141] libmachine: STDERR: 
	I0805 10:29:29.783587    7754 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/ha-035000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/ha-035000/disk.qcow2
	I0805 10:29:29.783591    7754 main.go:141] libmachine: Starting QEMU VM...
	I0805 10:29:29.783601    7754 qemu.go:418] Using hvf for hardware acceleration
	I0805 10:29:29.783639    7754 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/ha-035000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/ha-035000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/ha-035000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:45:72:07:6c:bf -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/ha-035000/disk.qcow2
	I0805 10:29:29.785298    7754 main.go:141] libmachine: STDOUT: 
	I0805 10:29:29.785313    7754 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 10:29:29.785330    7754 client.go:171] duration metric: took 200.912959ms to LocalClient.Create
	I0805 10:29:31.787476    7754 start.go:128] duration metric: took 2.22867725s to createHost
	I0805 10:29:31.787540    7754 start.go:83] releasing machines lock for "ha-035000", held for 2.228796833s
	W0805 10:29:31.787590    7754 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 10:29:31.794837    7754 out.go:177] * Deleting "ha-035000" in qemu2 ...
	W0805 10:29:31.824606    7754 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 10:29:31.824639    7754 start.go:729] Will try again in 5 seconds ...
	I0805 10:29:36.826778    7754 start.go:360] acquireMachinesLock for ha-035000: {Name:mkb00f076c5be0afa0d8c4c2e732c0f60f89f86b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 10:29:36.827398    7754 start.go:364] duration metric: took 464.833µs to acquireMachinesLock for "ha-035000"
	I0805 10:29:36.827540    7754 start.go:93] Provisioning new machine with config: &{Name:ha-035000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-035000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 10:29:36.827820    7754 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 10:29:36.836507    7754 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0805 10:29:36.886416    7754 start.go:159] libmachine.API.Create for "ha-035000" (driver="qemu2")
	I0805 10:29:36.886468    7754 client.go:168] LocalClient.Create starting
	I0805 10:29:36.886599    7754 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/ca.pem
	I0805 10:29:36.886660    7754 main.go:141] libmachine: Decoding PEM data...
	I0805 10:29:36.886675    7754 main.go:141] libmachine: Parsing certificate...
	I0805 10:29:36.886739    7754 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/cert.pem
	I0805 10:29:36.886782    7754 main.go:141] libmachine: Decoding PEM data...
	I0805 10:29:36.886801    7754 main.go:141] libmachine: Parsing certificate...
	I0805 10:29:36.887340    7754 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19374-6507/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 10:29:37.047119    7754 main.go:141] libmachine: Creating SSH key...
	I0805 10:29:37.141783    7754 main.go:141] libmachine: Creating Disk image...
	I0805 10:29:37.141788    7754 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 10:29:37.141966    7754 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/ha-035000/disk.qcow2.raw /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/ha-035000/disk.qcow2
	I0805 10:29:37.151413    7754 main.go:141] libmachine: STDOUT: 
	I0805 10:29:37.151436    7754 main.go:141] libmachine: STDERR: 
	I0805 10:29:37.151496    7754 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/ha-035000/disk.qcow2 +20000M
	I0805 10:29:37.159539    7754 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 10:29:37.159553    7754 main.go:141] libmachine: STDERR: 
	I0805 10:29:37.159564    7754 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/ha-035000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/ha-035000/disk.qcow2
	I0805 10:29:37.159568    7754 main.go:141] libmachine: Starting QEMU VM...
	I0805 10:29:37.159582    7754 qemu.go:418] Using hvf for hardware acceleration
	I0805 10:29:37.159622    7754 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/ha-035000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/ha-035000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/ha-035000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:ab:d1:ea:6f:e0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/ha-035000/disk.qcow2
	I0805 10:29:37.161238    7754 main.go:141] libmachine: STDOUT: 
	I0805 10:29:37.161251    7754 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 10:29:37.161262    7754 client.go:171] duration metric: took 274.792375ms to LocalClient.Create
	I0805 10:29:39.163392    7754 start.go:128] duration metric: took 2.335559958s to createHost
	I0805 10:29:39.163464    7754 start.go:83] releasing machines lock for "ha-035000", held for 2.336077084s
	W0805 10:29:39.163952    7754 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-035000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-035000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 10:29:39.176601    7754 out.go:177] 
	W0805 10:29:39.180588    7754 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 10:29:39.180615    7754 out.go:239] * 
	* 
	W0805 10:29:39.183550    7754 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 10:29:39.194518    7754 out.go:177] 

** /stderr **
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 start -p ha-035000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-035000 -n ha-035000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-035000 -n ha-035000: exit status 7 (68.466917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-035000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StartCluster (9.84s)
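
The root cause is not Kubernetes at all: both VM creation attempts die on Failed to connect to "/var/run/socket_vmnet": Connection refused, i.e. nothing is listening on the socket that the selected socket_vmnet network requires. A sketch of how one might confirm this on the build agent (the Homebrew service name is an assumption based on a default install):

	# The socket should exist and a daemon should be holding it open.
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet

	# With a Homebrew install, restarting the service usually brings the
	# listener back (assumed service name; adjust to the local setup).
	sudo brew services restart socket_vmnet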

TestMultiControlPlane/serial/DeployApp (81.99s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-035000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-035000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml: exit status 1 (60.659875ms)

** stderr ** 
	error: cluster "ha-035000" does not exist

** /stderr **
ha_test.go:130: failed to create busybox deployment to ha (multi-control plane) cluster
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-035000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-035000 -- rollout status deployment/busybox: exit status 1 (58.286917ms)

** stderr ** 
	error: no server found for cluster "ha-035000"

** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-035000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-035000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (57.005833ms)

** stderr ** 
	error: no server found for cluster "ha-035000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-035000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-035000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.966667ms)

** stderr ** 
	error: no server found for cluster "ha-035000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-035000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-035000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.890125ms)

** stderr ** 
	error: no server found for cluster "ha-035000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-035000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-035000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.852167ms)

** stderr ** 
	error: no server found for cluster "ha-035000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-035000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-035000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.890125ms)

** stderr ** 
	error: no server found for cluster "ha-035000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-035000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-035000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.833042ms)

** stderr ** 
	error: no server found for cluster "ha-035000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-035000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-035000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.55ms)

** stderr ** 
	error: no server found for cluster "ha-035000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-035000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-035000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.026917ms)

** stderr ** 
	error: no server found for cluster "ha-035000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-035000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-035000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.078625ms)

** stderr ** 
	error: no server found for cluster "ha-035000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-035000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-035000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.607333ms)

** stderr ** 
	error: no server found for cluster "ha-035000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:159: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-035000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:163: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-035000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (56.78775ms)

** stderr ** 
	error: no server found for cluster "ha-035000"

** /stderr **
ha_test.go:165: failed get Pod names
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-035000 -- exec  -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-035000 -- exec  -- nslookup kubernetes.io: exit status 1 (56.400792ms)

** stderr ** 
	error: no server found for cluster "ha-035000"

** /stderr **
ha_test.go:173: Pod  could not resolve 'kubernetes.io': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-035000 -- exec  -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-035000 -- exec  -- nslookup kubernetes.default: exit status 1 (55.500208ms)

** stderr ** 
	error: no server found for cluster "ha-035000"

** /stderr **
ha_test.go:183: Pod  could not resolve 'kubernetes.default': exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-035000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-035000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (55.917958ms)

** stderr ** 
	error: no server found for cluster "ha-035000"

** /stderr **
ha_test.go:191: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-035000 -n ha-035000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-035000 -n ha-035000: exit status 7 (29.010792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-035000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeployApp (81.99s)
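
All 81 seconds here are retries of kubectl calls failing with no server found for cluster "ha-035000": because StartCluster never created the VM, no server entry was written to the kubeconfig, so every command fails before any network traffic happens. That can be confirmed directly; a diagnostic sketch using the kubeconfig path from the run above:

	# The ha-035000 context and cluster entries will be absent.
	kubectl --kubeconfig /Users/jenkins/minikube-integration/19374-6507/kubeconfig config get-contexts
	kubectl --kubeconfig /Users/jenkins/minikube-integration/19374-6507/kubeconfig config get-clusters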

TestMultiControlPlane/serial/PingHostFromPods (0.09s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-035000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:199: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-035000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (55.75ms)

** stderr ** 
	error: no server found for cluster "ha-035000"

** /stderr **
ha_test.go:201: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-035000 -n ha-035000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-035000 -n ha-035000: exit status 7 (31.447291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-035000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (0.09s)

TestMultiControlPlane/serial/AddWorkerNode (0.08s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-035000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-035000 -v=7 --alsologtostderr: exit status 83 (45.862ms)

-- stdout --
	* The control-plane node ha-035000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-035000"

-- /stdout --
** stderr ** 
	I0805 10:31:01.388406    8119 out.go:291] Setting OutFile to fd 1 ...
	I0805 10:31:01.388789    8119 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:31:01.388794    8119 out.go:304] Setting ErrFile to fd 2...
	I0805 10:31:01.388796    8119 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:31:01.388950    8119 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
	I0805 10:31:01.389204    8119 mustload.go:65] Loading cluster: ha-035000
	I0805 10:31:01.389390    8119 config.go:182] Loaded profile config "ha-035000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 10:31:01.394646    8119 out.go:177] * The control-plane node ha-035000 host is not running: state=Stopped
	I0805 10:31:01.399443    8119 out.go:177]   To start a cluster, run: "minikube start -p ha-035000"

** /stderr **
ha_test.go:230: failed to add worker node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-035000 -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-035000 -n ha-035000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-035000 -n ha-035000: exit status 7 (29.493709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-035000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (0.08s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-035000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
ha_test.go:255: (dbg) Non-zero exit: kubectl --context ha-035000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.221333ms)

** stderr ** 
	Error in configuration: context was not found for specified context: ha-035000

** /stderr **
ha_test.go:257: failed to 'kubectl get nodes' with args "kubectl --context ha-035000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
ha_test.go:264: failed to decode json from label list: args "kubectl --context ha-035000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-035000 -n ha-035000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-035000 -n ha-035000: exit status 7 (29.189ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-035000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.08s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-035000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-035000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-035000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-035000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-035000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-035000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-035000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-035000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-035000 -n ha-035000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-035000 -n ha-035000: exit status 7 (29.752416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-035000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.08s)
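
The assertion reads node count and status out of profile list --output json. The profile was saved before the VM failed to start, so it records a single stopped control-plane node instead of the four nodes the test expects. The same check can be made by hand; a sketch assuming jq is available on the agent:

	# A healthy --ha start would report 4 nodes and status "HAppy";
	# this run reports 1 node and "Stopped".
	out/minikube-darwin-arm64 profile list --output json \
	  | jq '.valid[] | {name: .Name, status: .Status, nodes: (.Config.Nodes | length)}'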

TestMultiControlPlane/serial/CopyFile (0.06s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-035000 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-035000 status --output json -v=7 --alsologtostderr: exit status 7 (29.357666ms)

-- stdout --
	{"Name":"ha-035000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0805 10:31:01.595375    8131 out.go:291] Setting OutFile to fd 1 ...
	I0805 10:31:01.595518    8131 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:31:01.595521    8131 out.go:304] Setting ErrFile to fd 2...
	I0805 10:31:01.595523    8131 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:31:01.595659    8131 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
	I0805 10:31:01.595774    8131 out.go:298] Setting JSON to true
	I0805 10:31:01.595782    8131 mustload.go:65] Loading cluster: ha-035000
	I0805 10:31:01.595842    8131 notify.go:220] Checking for updates...
	I0805 10:31:01.595973    8131 config.go:182] Loaded profile config "ha-035000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 10:31:01.595979    8131 status.go:255] checking status of ha-035000 ...
	I0805 10:31:01.596201    8131 status.go:330] ha-035000 host status = "Stopped" (err=<nil>)
	I0805 10:31:01.596204    8131 status.go:343] host is not running, skipping remaining checks
	I0805 10:31:01.596207    8131 status.go:257] ha-035000 status: &{Name:ha-035000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:333: failed to decode json from status: args "out/minikube-darwin-arm64 -p ha-035000 status --output json -v=7 --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-035000 -n ha-035000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-035000 -n ha-035000: exit status 7 (29.0475ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-035000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/CopyFile (0.06s)
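
The decode error is a shape mismatch rather than a copy failure: with a single node, status --output json prints one JSON object, while the test unmarshals into []cmd.Status, a slice, which Go's encoding/json rejects with exactly the message above. The difference is visible from the shell; a sketch assuming jq, and assuming (inferred from the error text) that a multi-node profile would produce an array instead:

	# Prints "object" for this single-node profile; the test expects an array.
	out/minikube-darwin-arm64 -p ha-035000 status --output json | jq type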

TestMultiControlPlane/serial/StopSecondaryNode (0.11s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-035000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-035000 node stop m02 -v=7 --alsologtostderr: exit status 85 (48.93325ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0805 10:31:01.654795    8135 out.go:291] Setting OutFile to fd 1 ...
	I0805 10:31:01.655348    8135 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:31:01.655358    8135 out.go:304] Setting ErrFile to fd 2...
	I0805 10:31:01.655362    8135 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:31:01.655534    8135 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
	I0805 10:31:01.655771    8135 mustload.go:65] Loading cluster: ha-035000
	I0805 10:31:01.655942    8135 config.go:182] Loaded profile config "ha-035000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 10:31:01.660758    8135 out.go:177] 
	W0805 10:31:01.663892    8135 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0805 10:31:01.663897    8135 out.go:239] * 
	* 
	W0805 10:31:01.665880    8135 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 10:31:01.670948    8135 out.go:177] 

** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-darwin-arm64 -p ha-035000 node stop m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-035000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-035000 status -v=7 --alsologtostderr: exit status 7 (29.177917ms)

-- stdout --
	ha-035000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0805 10:31:01.703300    8137 out.go:291] Setting OutFile to fd 1 ...
	I0805 10:31:01.703511    8137 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:31:01.703514    8137 out.go:304] Setting ErrFile to fd 2...
	I0805 10:31:01.703516    8137 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:31:01.703654    8137 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
	I0805 10:31:01.703772    8137 out.go:298] Setting JSON to false
	I0805 10:31:01.703780    8137 mustload.go:65] Loading cluster: ha-035000
	I0805 10:31:01.703851    8137 notify.go:220] Checking for updates...
	I0805 10:31:01.704022    8137 config.go:182] Loaded profile config "ha-035000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 10:31:01.704029    8137 status.go:255] checking status of ha-035000 ...
	I0805 10:31:01.704237    8137 status.go:330] ha-035000 host status = "Stopped" (err=<nil>)
	I0805 10:31:01.704241    8137 status.go:343] host is not running, skipping remaining checks
	I0805 10:31:01.704243    8137 status.go:257] ha-035000 status: &{Name:ha-035000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:375: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-035000 status -v=7 --alsologtostderr": ha-035000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-035000 status -v=7 --alsologtostderr": ha-035000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-035000 status -v=7 --alsologtostderr": ha-035000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-035000 status -v=7 --alsologtostderr": ha-035000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-035000 -n ha-035000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-035000 -n ha-035000: exit status 7 (30.546708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-035000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (0.11s)
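The four assertions above (ha_test.go:375-384) fail together because the suite expects a three-node HA cluster, while `status` only ever reports the single stopped primary. A condensed illustration of what those checks count (a sketch, not the suite's actual code; the binary path and profile name are copied from the log):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Non-zero exit is expected while nodes are stopped, so the error
		// is ignored and only the captured output is inspected.
		out, _ := exec.Command("out/minikube-darwin-arm64",
			"-p", "ha-035000", "status", "-v=7").CombinedOutput()
		s := string(out)
		fmt.Println("control planes:", strings.Count(s, "type: Control Plane")) // want 3
		fmt.Println("hosts up:", strings.Count(s, "host: Running"))             // want 3
		fmt.Println("kubelets up:", strings.Count(s, "kubelet: Running"))       // want 3
		fmt.Println("apiservers up:", strings.Count(s, "apiserver: Running"))   // want 2
	}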

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.08s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-035000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-035000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-035000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-035000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-035000 -n ha-035000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-035000 -n ha-035000: exit status 7 (28.974875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-035000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.08s)
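The "Degraded" assertion parses `out/minikube-darwin-arm64 profile list --output json`; the blob quoted in the failure has the shape {"invalid":[...],"valid":[{"Name":...,"Status":...}]}. A minimal reader for the one field the check compares, with the struct trimmed accordingly (a sketch, not code from the suite):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// profileList mirrors just the part of minikube's "profile list"
	// JSON that the status assertion looks at.
	type profileList struct {
		Valid []struct {
			Name   string `json:"Name"`
			Status string `json:"Status"`
		} `json:"valid"`
	}

	func main() {
		out, err := exec.Command("out/minikube-darwin-arm64",
			"profile", "list", "--output", "json").Output()
		if err != nil {
			panic(err)
		}
		var pl profileList
		if err := json.Unmarshal(out, &pl); err != nil {
			panic(err)
		}
		for _, p := range pl.Valid {
			// The test wanted "Degraded" here; this run prints "Stopped".
			fmt.Printf("%s: %s\n", p.Name, p.Status)
		}
	}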

TestMultiControlPlane/serial/RestartSecondaryNode (49.16s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-035000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-035000 node start m02 -v=7 --alsologtostderr: exit status 85 (47.603875ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0805 10:31:01.840321    8146 out.go:291] Setting OutFile to fd 1 ...
	I0805 10:31:01.840705    8146 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:31:01.840709    8146 out.go:304] Setting ErrFile to fd 2...
	I0805 10:31:01.840711    8146 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:31:01.840893    8146 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
	I0805 10:31:01.841101    8146 mustload.go:65] Loading cluster: ha-035000
	I0805 10:31:01.841290    8146 config.go:182] Loaded profile config "ha-035000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 10:31:01.845073    8146 out.go:177] 
	W0805 10:31:01.849138    8146 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0805 10:31:01.849147    8146 out.go:239] * 
	* 
	W0805 10:31:01.851285    8146 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 10:31:01.854997    8146 out.go:177] 

** /stderr **
ha_test.go:422: I0805 10:31:01.840321    8146 out.go:291] Setting OutFile to fd 1 ...
I0805 10:31:01.840705    8146 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 10:31:01.840709    8146 out.go:304] Setting ErrFile to fd 2...
I0805 10:31:01.840711    8146 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 10:31:01.840893    8146 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
I0805 10:31:01.841101    8146 mustload.go:65] Loading cluster: ha-035000
I0805 10:31:01.841290    8146 config.go:182] Loaded profile config "ha-035000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0805 10:31:01.845073    8146 out.go:177] 
W0805 10:31:01.849138    8146 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
W0805 10:31:01.849147    8146 out.go:239] * 
* 
W0805 10:31:01.851285    8146 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0805 10:31:01.854997    8146 out.go:177] 
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-035000 node start m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-035000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-035000 status -v=7 --alsologtostderr: exit status 7 (30.040333ms)

-- stdout --
	ha-035000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0805 10:31:01.888516    8148 out.go:291] Setting OutFile to fd 1 ...
	I0805 10:31:01.888639    8148 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:31:01.888642    8148 out.go:304] Setting ErrFile to fd 2...
	I0805 10:31:01.888645    8148 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:31:01.888781    8148 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
	I0805 10:31:01.888892    8148 out.go:298] Setting JSON to false
	I0805 10:31:01.888901    8148 mustload.go:65] Loading cluster: ha-035000
	I0805 10:31:01.888955    8148 notify.go:220] Checking for updates...
	I0805 10:31:01.889106    8148 config.go:182] Loaded profile config "ha-035000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 10:31:01.889112    8148 status.go:255] checking status of ha-035000 ...
	I0805 10:31:01.889330    8148 status.go:330] ha-035000 host status = "Stopped" (err=<nil>)
	I0805 10:31:01.889334    8148 status.go:343] host is not running, skipping remaining checks
	I0805 10:31:01.889336    8148 status.go:257] ha-035000 status: &{Name:ha-035000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-035000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-035000 status -v=7 --alsologtostderr: exit status 7 (74.7875ms)

-- stdout --
	ha-035000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0805 10:31:03.447717    8150 out.go:291] Setting OutFile to fd 1 ...
	I0805 10:31:03.447930    8150 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:31:03.447935    8150 out.go:304] Setting ErrFile to fd 2...
	I0805 10:31:03.447939    8150 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:31:03.448168    8150 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
	I0805 10:31:03.448333    8150 out.go:298] Setting JSON to false
	I0805 10:31:03.448350    8150 mustload.go:65] Loading cluster: ha-035000
	I0805 10:31:03.448394    8150 notify.go:220] Checking for updates...
	I0805 10:31:03.448624    8150 config.go:182] Loaded profile config "ha-035000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 10:31:03.448631    8150 status.go:255] checking status of ha-035000 ...
	I0805 10:31:03.448899    8150 status.go:330] ha-035000 host status = "Stopped" (err=<nil>)
	I0805 10:31:03.448904    8150 status.go:343] host is not running, skipping remaining checks
	I0805 10:31:03.448907    8150 status.go:257] ha-035000 status: &{Name:ha-035000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-035000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-035000 status -v=7 --alsologtostderr: exit status 7 (71.311292ms)

-- stdout --
	ha-035000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0805 10:31:04.706410    8152 out.go:291] Setting OutFile to fd 1 ...
	I0805 10:31:04.706623    8152 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:31:04.706627    8152 out.go:304] Setting ErrFile to fd 2...
	I0805 10:31:04.706631    8152 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:31:04.706823    8152 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
	I0805 10:31:04.706983    8152 out.go:298] Setting JSON to false
	I0805 10:31:04.706993    8152 mustload.go:65] Loading cluster: ha-035000
	I0805 10:31:04.707035    8152 notify.go:220] Checking for updates...
	I0805 10:31:04.707291    8152 config.go:182] Loaded profile config "ha-035000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 10:31:04.707302    8152 status.go:255] checking status of ha-035000 ...
	I0805 10:31:04.707600    8152 status.go:330] ha-035000 host status = "Stopped" (err=<nil>)
	I0805 10:31:04.707605    8152 status.go:343] host is not running, skipping remaining checks
	I0805 10:31:04.707608    8152 status.go:257] ha-035000 status: &{Name:ha-035000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-035000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-035000 status -v=7 --alsologtostderr: exit status 7 (74.444667ms)

-- stdout --
	ha-035000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0805 10:31:06.248581    8156 out.go:291] Setting OutFile to fd 1 ...
	I0805 10:31:06.249109    8156 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:31:06.249116    8156 out.go:304] Setting ErrFile to fd 2...
	I0805 10:31:06.249120    8156 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:31:06.249382    8156 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
	I0805 10:31:06.249584    8156 out.go:298] Setting JSON to false
	I0805 10:31:06.249596    8156 mustload.go:65] Loading cluster: ha-035000
	I0805 10:31:06.249710    8156 notify.go:220] Checking for updates...
	I0805 10:31:06.250207    8156 config.go:182] Loaded profile config "ha-035000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 10:31:06.250224    8156 status.go:255] checking status of ha-035000 ...
	I0805 10:31:06.250495    8156 status.go:330] ha-035000 host status = "Stopped" (err=<nil>)
	I0805 10:31:06.250501    8156 status.go:343] host is not running, skipping remaining checks
	I0805 10:31:06.250504    8156 status.go:257] ha-035000 status: &{Name:ha-035000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-035000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-035000 status -v=7 --alsologtostderr: exit status 7 (73.736916ms)

-- stdout --
	ha-035000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0805 10:31:11.149242    8159 out.go:291] Setting OutFile to fd 1 ...
	I0805 10:31:11.149432    8159 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:31:11.149436    8159 out.go:304] Setting ErrFile to fd 2...
	I0805 10:31:11.149439    8159 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:31:11.149623    8159 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
	I0805 10:31:11.149776    8159 out.go:298] Setting JSON to false
	I0805 10:31:11.149788    8159 mustload.go:65] Loading cluster: ha-035000
	I0805 10:31:11.149833    8159 notify.go:220] Checking for updates...
	I0805 10:31:11.150039    8159 config.go:182] Loaded profile config "ha-035000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 10:31:11.150047    8159 status.go:255] checking status of ha-035000 ...
	I0805 10:31:11.150327    8159 status.go:330] ha-035000 host status = "Stopped" (err=<nil>)
	I0805 10:31:11.150332    8159 status.go:343] host is not running, skipping remaining checks
	I0805 10:31:11.150335    8159 status.go:257] ha-035000 status: &{Name:ha-035000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-035000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-035000 status -v=7 --alsologtostderr: exit status 7 (72.841584ms)

-- stdout --
	ha-035000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0805 10:31:17.539335    8164 out.go:291] Setting OutFile to fd 1 ...
	I0805 10:31:17.539565    8164 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:31:17.539570    8164 out.go:304] Setting ErrFile to fd 2...
	I0805 10:31:17.539574    8164 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:31:17.539760    8164 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
	I0805 10:31:17.539924    8164 out.go:298] Setting JSON to false
	I0805 10:31:17.539936    8164 mustload.go:65] Loading cluster: ha-035000
	I0805 10:31:17.539984    8164 notify.go:220] Checking for updates...
	I0805 10:31:17.540201    8164 config.go:182] Loaded profile config "ha-035000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 10:31:17.540208    8164 status.go:255] checking status of ha-035000 ...
	I0805 10:31:17.540497    8164 status.go:330] ha-035000 host status = "Stopped" (err=<nil>)
	I0805 10:31:17.540502    8164 status.go:343] host is not running, skipping remaining checks
	I0805 10:31:17.540505    8164 status.go:257] ha-035000 status: &{Name:ha-035000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-035000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-035000 status -v=7 --alsologtostderr: exit status 7 (73.150416ms)

-- stdout --
	ha-035000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0805 10:31:28.871057    8174 out.go:291] Setting OutFile to fd 1 ...
	I0805 10:31:28.871318    8174 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:31:28.871322    8174 out.go:304] Setting ErrFile to fd 2...
	I0805 10:31:28.871326    8174 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:31:28.871557    8174 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
	I0805 10:31:28.871720    8174 out.go:298] Setting JSON to false
	I0805 10:31:28.871731    8174 mustload.go:65] Loading cluster: ha-035000
	I0805 10:31:28.871778    8174 notify.go:220] Checking for updates...
	I0805 10:31:28.871985    8174 config.go:182] Loaded profile config "ha-035000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 10:31:28.871992    8174 status.go:255] checking status of ha-035000 ...
	I0805 10:31:28.872266    8174 status.go:330] ha-035000 host status = "Stopped" (err=<nil>)
	I0805 10:31:28.872271    8174 status.go:343] host is not running, skipping remaining checks
	I0805 10:31:28.872274    8174 status.go:257] ha-035000 status: &{Name:ha-035000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-035000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-035000 status -v=7 --alsologtostderr: exit status 7 (68.974625ms)

-- stdout --
	ha-035000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0805 10:31:38.574064    8182 out.go:291] Setting OutFile to fd 1 ...
	I0805 10:31:38.574277    8182 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:31:38.574282    8182 out.go:304] Setting ErrFile to fd 2...
	I0805 10:31:38.574285    8182 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:31:38.574469    8182 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
	I0805 10:31:38.574643    8182 out.go:298] Setting JSON to false
	I0805 10:31:38.574653    8182 mustload.go:65] Loading cluster: ha-035000
	I0805 10:31:38.574696    8182 notify.go:220] Checking for updates...
	I0805 10:31:38.574911    8182 config.go:182] Loaded profile config "ha-035000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 10:31:38.574918    8182 status.go:255] checking status of ha-035000 ...
	I0805 10:31:38.575209    8182 status.go:330] ha-035000 host status = "Stopped" (err=<nil>)
	I0805 10:31:38.575214    8182 status.go:343] host is not running, skipping remaining checks
	I0805 10:31:38.575217    8182 status.go:257] ha-035000 status: &{Name:ha-035000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-035000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-035000 status -v=7 --alsologtostderr: exit status 7 (75.346084ms)

-- stdout --
	ha-035000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0805 10:31:50.951995    8188 out.go:291] Setting OutFile to fd 1 ...
	I0805 10:31:50.952214    8188 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:31:50.952220    8188 out.go:304] Setting ErrFile to fd 2...
	I0805 10:31:50.952223    8188 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:31:50.952413    8188 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
	I0805 10:31:50.952617    8188 out.go:298] Setting JSON to false
	I0805 10:31:50.952629    8188 mustload.go:65] Loading cluster: ha-035000
	I0805 10:31:50.952677    8188 notify.go:220] Checking for updates...
	I0805 10:31:50.952916    8188 config.go:182] Loaded profile config "ha-035000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 10:31:50.952924    8188 status.go:255] checking status of ha-035000 ...
	I0805 10:31:50.953231    8188 status.go:330] ha-035000 host status = "Stopped" (err=<nil>)
	I0805 10:31:50.953236    8188 status.go:343] host is not running, skipping remaining checks
	I0805 10:31:50.953239    8188 status.go:257] ha-035000 status: &{Name:ha-035000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-035000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-035000 -n ha-035000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-035000 -n ha-035000: exit status 7 (34.246625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-035000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (49.16s)
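The 49-second wall time is polling, not work: after `node start` fails, ha_test.go:428 re-runs `status` with lengthening pauses (the stderr timestamps step from 10:31:01 to 10:31:50) before ha_test.go:432 gives up. A sketch of that retry shape; the doubling backoff and one-minute budget here are assumptions, not values taken from the suite:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForRunning re-runs "minikube status" until it exits zero or the
	// budget is spent, doubling the pause between attempts.
	func waitForRunning(timeout time.Duration) bool {
		deadline := time.Now().Add(timeout)
		for delay := time.Second; time.Now().Before(deadline); delay *= 2 {
			if exec.Command("out/minikube-darwin-arm64",
				"-p", "ha-035000", "status").Run() == nil {
				return true // exit status 0: every node reports Running
			}
			time.Sleep(delay)
		}
		return false
	}

	func main() {
		fmt.Println("cluster healthy:", waitForRunning(time.Minute))
	}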

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.08s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-035000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-035000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-035000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPo
rt\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-035000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRu
ntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHA
gentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-035000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-035000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-035000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,
\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-035000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",
\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\
":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-035000 -n ha-035000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-035000 -n ha-035000: exit status 7 (29.257416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-035000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.08s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (9s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-035000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-035000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-035000 -v=7 --alsologtostderr: (3.639061208s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-035000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-035000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.227312458s)

-- stdout --
	* [ha-035000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19374
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-035000" primary control-plane node in "ha-035000" cluster
	* Restarting existing qemu2 VM for "ha-035000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-035000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
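Both restart attempts in the stderr trace below die the same way: libmachine launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, and the client cannot reach the daemon behind /var/run/socket_vmnet ("Connection refused"). A standalone probe of that socket, outside minikube entirely; the socket path is taken from the log:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Dial the same unix socket socket_vmnet_client uses; a refused
		// connection reproduces the failure without involving minikube.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}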
** stderr ** 
	I0805 10:31:54.797843    8217 out.go:291] Setting OutFile to fd 1 ...
	I0805 10:31:54.798007    8217 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:31:54.798012    8217 out.go:304] Setting ErrFile to fd 2...
	I0805 10:31:54.798014    8217 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:31:54.798195    8217 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
	I0805 10:31:54.799476    8217 out.go:298] Setting JSON to false
	I0805 10:31:54.818729    8217 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5484,"bootTime":1722873630,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 10:31:54.818798    8217 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 10:31:54.823461    8217 out.go:177] * [ha-035000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 10:31:54.830421    8217 out.go:177]   - MINIKUBE_LOCATION=19374
	I0805 10:31:54.830483    8217 notify.go:220] Checking for updates...
	I0805 10:31:54.837394    8217 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	I0805 10:31:54.840437    8217 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 10:31:54.843374    8217 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 10:31:54.846456    8217 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	I0805 10:31:54.849366    8217 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 10:31:54.856460    8217 config.go:182] Loaded profile config "ha-035000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 10:31:54.856521    8217 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 10:31:54.861380    8217 out.go:177] * Using the qemu2 driver based on existing profile
	I0805 10:31:54.868416    8217 start.go:297] selected driver: qemu2
	I0805 10:31:54.868423    8217 start.go:901] validating driver "qemu2" against &{Name:ha-035000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.30.3 ClusterName:ha-035000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 10:31:54.868491    8217 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 10:31:54.871008    8217 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 10:31:54.871058    8217 cni.go:84] Creating CNI manager for ""
	I0805 10:31:54.871064    8217 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0805 10:31:54.871107    8217 start.go:340] cluster config:
	{Name:ha-035000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-035000 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 10:31:54.875002    8217 iso.go:125] acquiring lock: {Name:mk3c732f608e41abb95f9d7b90e7a96dff21b06c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 10:31:54.882356    8217 out.go:177] * Starting "ha-035000" primary control-plane node in "ha-035000" cluster
	I0805 10:31:54.886411    8217 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 10:31:54.886435    8217 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0805 10:31:54.886447    8217 cache.go:56] Caching tarball of preloaded images
	I0805 10:31:54.886524    8217 preload.go:172] Found /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 10:31:54.886530    8217 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 10:31:54.886596    8217 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/ha-035000/config.json ...
	I0805 10:31:54.887129    8217 start.go:360] acquireMachinesLock for ha-035000: {Name:mkb00f076c5be0afa0d8c4c2e732c0f60f89f86b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 10:31:54.887166    8217 start.go:364] duration metric: took 30.458µs to acquireMachinesLock for "ha-035000"
	I0805 10:31:54.887174    8217 start.go:96] Skipping create...Using existing machine configuration
	I0805 10:31:54.887181    8217 fix.go:54] fixHost starting: 
	I0805 10:31:54.887315    8217 fix.go:112] recreateIfNeeded on ha-035000: state=Stopped err=<nil>
	W0805 10:31:54.887323    8217 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 10:31:54.896386    8217 out.go:177] * Restarting existing qemu2 VM for "ha-035000" ...
	I0805 10:31:54.900446    8217 qemu.go:418] Using hvf for hardware acceleration
	I0805 10:31:54.900495    8217 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/ha-035000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/ha-035000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/ha-035000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:ab:d1:ea:6f:e0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/ha-035000/disk.qcow2
	I0805 10:31:54.902624    8217 main.go:141] libmachine: STDOUT: 
	I0805 10:31:54.902647    8217 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 10:31:54.902675    8217 fix.go:56] duration metric: took 15.494208ms for fixHost
	I0805 10:31:54.902681    8217 start.go:83] releasing machines lock for "ha-035000", held for 15.510417ms
	W0805 10:31:54.902687    8217 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 10:31:54.902715    8217 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 10:31:54.902720    8217 start.go:729] Will try again in 5 seconds ...
	I0805 10:31:59.904597    8217 start.go:360] acquireMachinesLock for ha-035000: {Name:mkb00f076c5be0afa0d8c4c2e732c0f60f89f86b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 10:31:59.905001    8217 start.go:364] duration metric: took 296.375µs to acquireMachinesLock for "ha-035000"
	I0805 10:31:59.905135    8217 start.go:96] Skipping create...Using existing machine configuration
	I0805 10:31:59.905154    8217 fix.go:54] fixHost starting: 
	I0805 10:31:59.905880    8217 fix.go:112] recreateIfNeeded on ha-035000: state=Stopped err=<nil>
	W0805 10:31:59.905911    8217 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 10:31:59.909312    8217 out.go:177] * Restarting existing qemu2 VM for "ha-035000" ...
	I0805 10:31:59.917333    8217 qemu.go:418] Using hvf for hardware acceleration
	I0805 10:31:59.917528    8217 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/ha-035000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/ha-035000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/ha-035000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:ab:d1:ea:6f:e0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/ha-035000/disk.qcow2
	I0805 10:31:59.926316    8217 main.go:141] libmachine: STDOUT: 
	I0805 10:31:59.926372    8217 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 10:31:59.926446    8217 fix.go:56] duration metric: took 21.29275ms for fixHost
	I0805 10:31:59.926482    8217 start.go:83] releasing machines lock for "ha-035000", held for 21.458792ms
	W0805 10:31:59.926643    8217 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-035000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-035000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 10:31:59.934321    8217 out.go:177] 
	W0805 10:31:59.938420    8217 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 10:31:59.938443    8217 out.go:239] * 
	* 
	W0805 10:31:59.941108    8217 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 10:31:59.948247    8217 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-035000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-035000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-035000 -n ha-035000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-035000 -n ha-035000: exit status 7 (32.200042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-035000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (9.00s)
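Note: every failure in this section reduces to the same fault. socket_vmnet_client first connects to the daemon's unix socket at /var/run/socket_vmnet and only then execs qemu, handing the connected socket over as fd 3 (hence "-netdev socket,id=net0,fd=3" in the command lines above); with the daemon down, the connect is refused and qemu never starts, so every start/restart exits with status 80. A quick host-side probe, as a sketch (assumes the BSD nc shipped with macOS; the socket path is taken from the log):

	nc -U /var/run/socket_vmnet < /dev/null \
	  && echo "socket_vmnet accepting connections" \
	  || echo "connection refused - matches the driver errors above"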

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-035000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-035000 node delete m03 -v=7 --alsologtostderr: exit status 83 (37.392958ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-035000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-035000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0805 10:32:00.090501    8229 out.go:291] Setting OutFile to fd 1 ...
	I0805 10:32:00.091082    8229 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:32:00.091086    8229 out.go:304] Setting ErrFile to fd 2...
	I0805 10:32:00.091088    8229 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:32:00.091232    8229 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
	I0805 10:32:00.091441    8229 mustload.go:65] Loading cluster: ha-035000
	I0805 10:32:00.091619    8229 config.go:182] Loaded profile config "ha-035000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 10:32:00.094646    8229 out.go:177] * The control-plane node ha-035000 host is not running: state=Stopped
	I0805 10:32:00.097673    8229 out.go:177]   To start a cluster, run: "minikube start -p ha-035000"

                                                
                                                
** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-035000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-035000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-035000 status -v=7 --alsologtostderr: exit status 7 (29.664125ms)

                                                
                                                
-- stdout --
	ha-035000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0805 10:32:00.127713    8231 out.go:291] Setting OutFile to fd 1 ...
	I0805 10:32:00.127850    8231 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:32:00.127853    8231 out.go:304] Setting ErrFile to fd 2...
	I0805 10:32:00.127856    8231 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:32:00.128002    8231 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
	I0805 10:32:00.128132    8231 out.go:298] Setting JSON to false
	I0805 10:32:00.128141    8231 mustload.go:65] Loading cluster: ha-035000
	I0805 10:32:00.128200    8231 notify.go:220] Checking for updates...
	I0805 10:32:00.128346    8231 config.go:182] Loaded profile config "ha-035000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 10:32:00.128356    8231 status.go:255] checking status of ha-035000 ...
	I0805 10:32:00.128567    8231 status.go:330] ha-035000 host status = "Stopped" (err=<nil>)
	I0805 10:32:00.128570    8231 status.go:343] host is not running, skipping remaining checks
	I0805 10:32:00.128572    8231 status.go:257] ha-035000 status: &{Name:ha-035000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-035000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-035000 -n ha-035000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-035000 -n ha-035000: exit status 7 (28.860958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-035000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.10s)
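Note: the two exit codes above carry different information. Exit status 83 is minikube's advice path ("host is not running ... To start a cluster, run ..."), while the status command's exit status 7 is a bitmask: per `minikube status --help`, the host, kubelet and apiserver states are encoded one bit each from the least significant bit, so 7 means all three are down, which is why helpers_test treats it as "may be ok". A decoding sketch (bit assignments assumed from that help text):

	out/minikube-darwin-arm64 status -p ha-035000 >/dev/null 2>&1; rc=$?
	(( rc & 1 )) && echo "host not running"
	(( rc & 2 )) && echo "kubelet not running"
	(( rc & 4 )) && echo "apiserver not running"   # rc=7 here: all three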

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-035000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-035000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-035000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-035000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-035000 -n ha-035000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-035000 -n ha-035000: exit status 7 (29.156833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-035000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)
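Note: the assertion above diffs one multi-kilobyte JSON blob, which buries the two fields it actually checks. Filtering them out directly makes the failure readable; a sketch assuming jq is available on the host:

	out/minikube-darwin-arm64 profile list --output json \
	  | jq '.valid[] | {Name, Status, nodes: (.Config.Nodes | length)}'
	# this run yields Status "Stopped" with 1 node, where the test expects
	# "Degraded" - consistent with the VM never having come up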

                                                
                                    
TestMultiControlPlane/serial/StopCluster (1.94s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-035000 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-035000 stop -v=7 --alsologtostderr: (1.846678084s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-035000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-035000 status -v=7 --alsologtostderr: exit status 7 (65.049042ms)

                                                
                                                
-- stdout --
	ha-035000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0805 10:32:02.144021    8250 out.go:291] Setting OutFile to fd 1 ...
	I0805 10:32:02.144219    8250 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:32:02.144227    8250 out.go:304] Setting ErrFile to fd 2...
	I0805 10:32:02.144230    8250 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:32:02.144396    8250 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
	I0805 10:32:02.144541    8250 out.go:298] Setting JSON to false
	I0805 10:32:02.144552    8250 mustload.go:65] Loading cluster: ha-035000
	I0805 10:32:02.144595    8250 notify.go:220] Checking for updates...
	I0805 10:32:02.144809    8250 config.go:182] Loaded profile config "ha-035000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 10:32:02.144816    8250 status.go:255] checking status of ha-035000 ...
	I0805 10:32:02.145099    8250 status.go:330] ha-035000 host status = "Stopped" (err=<nil>)
	I0805 10:32:02.145104    8250 status.go:343] host is not running, skipping remaining checks
	I0805 10:32:02.145107    8250 status.go:257] ha-035000 status: &{Name:ha-035000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-035000 status -v=7 --alsologtostderr": ha-035000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-035000 status -v=7 --alsologtostderr": ha-035000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-035000 status -v=7 --alsologtostderr": ha-035000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-035000 -n ha-035000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-035000 -n ha-035000: exit status 7 (31.870542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-035000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (1.94s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (5.25s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-035000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-035000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.184068417s)

                                                
                                                
-- stdout --
	* [ha-035000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19374
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-035000" primary control-plane node in "ha-035000" cluster
	* Restarting existing qemu2 VM for "ha-035000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-035000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0805 10:32:02.205722    8254 out.go:291] Setting OutFile to fd 1 ...
	I0805 10:32:02.205845    8254 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:32:02.205848    8254 out.go:304] Setting ErrFile to fd 2...
	I0805 10:32:02.205850    8254 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:32:02.205969    8254 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
	I0805 10:32:02.206959    8254 out.go:298] Setting JSON to false
	I0805 10:32:02.222881    8254 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5492,"bootTime":1722873630,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 10:32:02.222951    8254 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 10:32:02.228036    8254 out.go:177] * [ha-035000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 10:32:02.235018    8254 out.go:177]   - MINIKUBE_LOCATION=19374
	I0805 10:32:02.235062    8254 notify.go:220] Checking for updates...
	I0805 10:32:02.241955    8254 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	I0805 10:32:02.244929    8254 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 10:32:02.247928    8254 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 10:32:02.250923    8254 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	I0805 10:32:02.253965    8254 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 10:32:02.257182    8254 config.go:182] Loaded profile config "ha-035000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 10:32:02.257449    8254 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 10:32:02.261928    8254 out.go:177] * Using the qemu2 driver based on existing profile
	I0805 10:32:02.268971    8254 start.go:297] selected driver: qemu2
	I0805 10:32:02.268978    8254 start.go:901] validating driver "qemu2" against &{Name:ha-035000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.30.3 ClusterName:ha-035000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 10:32:02.269031    8254 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 10:32:02.271079    8254 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 10:32:02.271107    8254 cni.go:84] Creating CNI manager for ""
	I0805 10:32:02.271114    8254 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0805 10:32:02.271164    8254 start.go:340] cluster config:
	{Name:ha-035000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-035000 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 10:32:02.274592    8254 iso.go:125] acquiring lock: {Name:mk3c732f608e41abb95f9d7b90e7a96dff21b06c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 10:32:02.281801    8254 out.go:177] * Starting "ha-035000" primary control-plane node in "ha-035000" cluster
	I0805 10:32:02.285925    8254 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 10:32:02.285944    8254 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0805 10:32:02.285958    8254 cache.go:56] Caching tarball of preloaded images
	I0805 10:32:02.286010    8254 preload.go:172] Found /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 10:32:02.286015    8254 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 10:32:02.286088    8254 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/ha-035000/config.json ...
	I0805 10:32:02.286460    8254 start.go:360] acquireMachinesLock for ha-035000: {Name:mkb00f076c5be0afa0d8c4c2e732c0f60f89f86b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 10:32:02.286495    8254 start.go:364] duration metric: took 28.459µs to acquireMachinesLock for "ha-035000"
	I0805 10:32:02.286503    8254 start.go:96] Skipping create...Using existing machine configuration
	I0805 10:32:02.286509    8254 fix.go:54] fixHost starting: 
	I0805 10:32:02.286624    8254 fix.go:112] recreateIfNeeded on ha-035000: state=Stopped err=<nil>
	W0805 10:32:02.286631    8254 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 10:32:02.294889    8254 out.go:177] * Restarting existing qemu2 VM for "ha-035000" ...
	I0805 10:32:02.298885    8254 qemu.go:418] Using hvf for hardware acceleration
	I0805 10:32:02.298922    8254 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/ha-035000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/ha-035000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/ha-035000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:ab:d1:ea:6f:e0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/ha-035000/disk.qcow2
	I0805 10:32:02.300893    8254 main.go:141] libmachine: STDOUT: 
	I0805 10:32:02.300913    8254 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 10:32:02.300941    8254 fix.go:56] duration metric: took 14.432584ms for fixHost
	I0805 10:32:02.300945    8254 start.go:83] releasing machines lock for "ha-035000", held for 14.445708ms
	W0805 10:32:02.300952    8254 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 10:32:02.300993    8254 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 10:32:02.300997    8254 start.go:729] Will try again in 5 seconds ...
	I0805 10:32:07.303344    8254 start.go:360] acquireMachinesLock for ha-035000: {Name:mkb00f076c5be0afa0d8c4c2e732c0f60f89f86b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 10:32:07.303784    8254 start.go:364] duration metric: took 330.75µs to acquireMachinesLock for "ha-035000"
	I0805 10:32:07.303917    8254 start.go:96] Skipping create...Using existing machine configuration
	I0805 10:32:07.303934    8254 fix.go:54] fixHost starting: 
	I0805 10:32:07.304627    8254 fix.go:112] recreateIfNeeded on ha-035000: state=Stopped err=<nil>
	W0805 10:32:07.304656    8254 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 10:32:07.310279    8254 out.go:177] * Restarting existing qemu2 VM for "ha-035000" ...
	I0805 10:32:07.318147    8254 qemu.go:418] Using hvf for hardware acceleration
	I0805 10:32:07.318356    8254 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/ha-035000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/ha-035000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/ha-035000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:ab:d1:ea:6f:e0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/ha-035000/disk.qcow2
	I0805 10:32:07.327192    8254 main.go:141] libmachine: STDOUT: 
	I0805 10:32:07.327247    8254 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 10:32:07.327361    8254 fix.go:56] duration metric: took 23.426458ms for fixHost
	I0805 10:32:07.327375    8254 start.go:83] releasing machines lock for "ha-035000", held for 23.572584ms
	W0805 10:32:07.327538    8254 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-035000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-035000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 10:32:07.335122    8254 out.go:177] 
	W0805 10:32:07.339176    8254 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 10:32:07.339201    8254 out.go:239] * 
	* 
	W0805 10:32:07.341884    8254 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 10:32:07.349161    8254 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-035000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-035000 -n ha-035000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-035000 -n ha-035000: exit status 7 (67.526292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-035000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.25s)
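Note: restarting the existing VM will keep failing like this until the daemon behind /var/run/socket_vmnet is back up; the suggested "minikube delete -p ha-035000" will not help here, because the fault is host-side rather than in the profile. A remediation sketch; the right restart command depends on how socket_vmnet was installed (the /opt/socket_vmnet prefix in the log suggests a source install managed by launchd, whereas a Homebrew install would use "sudo brew services restart socket_vmnet"):

	pgrep -fl socket_vmnet || echo "daemon not running"
	sudo launchctl list | grep -i vmnet    # is a launchd job loaded at all?
	ls -l /var/run/socket_vmnet            # socket should exist once it is back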

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-035000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-035000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-035000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-035000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-035000 -n ha-035000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-035000 -n ha-035000: exit status 7 (29.632708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-035000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-035000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-035000 --control-plane -v=7 --alsologtostderr: exit status 83 (41.824292ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-035000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-035000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0805 10:32:07.537861    8271 out.go:291] Setting OutFile to fd 1 ...
	I0805 10:32:07.538017    8271 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:32:07.538020    8271 out.go:304] Setting ErrFile to fd 2...
	I0805 10:32:07.538022    8271 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:32:07.538145    8271 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
	I0805 10:32:07.538374    8271 mustload.go:65] Loading cluster: ha-035000
	I0805 10:32:07.538556    8271 config.go:182] Loaded profile config "ha-035000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 10:32:07.543156    8271 out.go:177] * The control-plane node ha-035000 host is not running: state=Stopped
	I0805 10:32:07.547205    8271 out.go:177]   To start a cluster, run: "minikube start -p ha-035000"

                                                
                                                
** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-035000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-035000 -n ha-035000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-035000 -n ha-035000: exit status 7 (29.840459ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-035000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-035000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-035000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-035000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPo
rt\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-035000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRu
ntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHA
gentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-035000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-035000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-035000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,
\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-035000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",
\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\
":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-035000 -n ha-035000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-035000 -n ha-035000: exit status 7 (29.720167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-035000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.08s)

                                                
                                    
TestImageBuild/serial/Setup (9.9s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-069000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-069000 --driver=qemu2 : exit status 80 (9.828567166s)

                                                
                                                
-- stdout --
	* [image-069000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19374
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-069000" primary control-plane node in "image-069000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-069000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-069000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-069000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-069000 -n image-069000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-069000 -n image-069000: exit status 7 (67.0195ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-069000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (9.90s)
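
Every qemu2 start in this run dies on the same dial error: nothing is listening on the socket_vmnet Unix socket, so socket_vmnet_client is refused before qemu-system-aarch64 is ever launched. A minimal Go probe reproduces the diagnosis; the file name and helper are illustrative (not part of the minikube tree), and the only assumption is the socket path quoted in the logs:

	// socketcheck.go - dial the socket_vmnet control socket the same way
	// socket_vmnet_client does before handing a fd to qemu-system-aarch64.
	// On this CI host it prints the "connection refused" seen above.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path taken from the test output

		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
			os.Exit(1) // socket_vmnet_client likewise exits 1 in the logs
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If this probe fails with "connect: connection refused", the socket file exists but no socket_vmnet daemon is serving it, which is consistent with every GUEST_PROVISION failure in this report.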

                                                
                                    
TestJSONOutput/start/Command (9.84s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-146000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-146000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.838948041s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"278d4c80-5e66-4868-83d2-ac49f78b0705","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-146000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"878280da-0f23-43e3-96a1-576c60b2d091","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19374"}}
	{"specversion":"1.0","id":"b05dce57-19f5-48a1-813c-9ccea336b845","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig"}}
	{"specversion":"1.0","id":"268f3446-b060-4bfe-98d8-28ff348b72f8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"c0f55578-f701-476f-b985-0a2bce333f9b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"41cafd54-6612-4b1e-b942-b77b37749b35","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube"}}
	{"specversion":"1.0","id":"18b24340-0675-4564-b9c1-0aa09aa6e99e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"75a22ab8-e946-458e-a5b8-a259d73628a0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"278c7812-5b8f-4590-a0b1-28e8166d4b93","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"ca4b1fa0-5d8f-4cbb-b7c8-3bf05a8c5e6a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-146000\" primary control-plane node in \"json-output-146000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"0f8438df-02d2-4bb9-ae82-66fb29276e7e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"b2450724-10e7-418c-bb24-93421b25a453","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-146000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"989ec3ca-e67c-465e-af9d-2acea2917da3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"d71fdeb8-f792-4715-b98e-3e1a31d072e2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"d916b130-48d7-4313-9fbe-c55b2ff83c59","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-146000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"1e20578a-d245-45f1-adb6-f569b73f2fe4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"9472a7c0-fbb5-4a1d-bab6-ddbaa0a49eb3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-146000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.84s)
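
TestJSONOutput fails on two levels: the start itself exits 80, and the harness then cannot parse stdout as a CloudEvents stream, because the raw "OUTPUT:"/"ERROR:" lines from socket_vmnet_client are interleaved with the JSON events; the first non-JSON byte is the 'O' reported in the "invalid character 'O'" error above. A sketch of that kind of line-by-line decode (illustrative only; the real logic lives in json_output_test.go):

	// parseevents.go - decode a minikube --output=json stream one line at a
	// time. Any non-JSON line, such as the stray "OUTPUT:" above, aborts with
	// "invalid character 'O' looking for beginning of value".
	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	func main() {
		sc := bufio.NewScanner(os.Stdin)
		sc.Buffer(make([]byte, 0, 64*1024), 1024*1024) // event lines can be long
		for sc.Scan() {
			line := sc.Bytes()
			if len(line) == 0 {
				continue // blank separator lines are harmless
			}
			var ev map[string]interface{}
			if err := json.Unmarshal(line, &ev); err != nil {
				fmt.Fprintf(os.Stderr, "converting to cloud events: %v\n", err)
				os.Exit(1)
			}
			fmt.Println("event:", ev["type"])
		}
	}

The same mechanism explains the unpause failure below: there the stream is plain text starting with "*", hence "invalid character '*'".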

                                                
                                    
TestJSONOutput/pause/Command (0.08s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-146000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-146000 --output=json --user=testUser: exit status 83 (77.007833ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"873a91be-63f8-411b-96f6-7cc915fab4cc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-146000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"4b56545e-fb0e-4484-9679-874ef11cd45a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-146000\""}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-146000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

                                                
                                    
TestJSONOutput/unpause/Command (0.05s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-146000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-146000 --output=json --user=testUser: exit status 83 (45.796209ms)

                                                
                                                
-- stdout --
	* The control-plane node json-output-146000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-146000"

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-146000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-146000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.05s)

                                                
                                    
TestMinikubeProfile (10.2s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-807000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-807000 --driver=qemu2 : exit status 80 (9.916657292s)

                                                
                                                
-- stdout --
	* [first-807000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19374
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-807000" primary control-plane node in "first-807000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-807000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-807000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-807000 --driver=qemu2 ": exit status 80
panic.go:626: *** TestMinikubeProfile FAILED at 2024-08-05 10:32:41.63095 -0700 PDT m=+432.336578626
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-808000 -n second-808000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-808000 -n second-808000: exit status 85 (77.778917ms)

                                                
                                                
-- stdout --
	* Profile "second-808000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-808000"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-808000" host is not running, skipping log retrieval (state="* Profile \"second-808000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-808000\"")
helpers_test.go:175: Cleaning up "second-808000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-808000
panic.go:626: *** TestMinikubeProfile FAILED at 2024-08-05 10:32:41.814801 -0700 PDT m=+432.520431334
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-807000 -n first-807000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-807000 -n first-807000: exit status 7 (29.21ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-807000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-807000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-807000
--- FAIL: TestMinikubeProfile (10.20s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (9.93s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-791000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-791000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (9.856267875s)

                                                
                                                
-- stdout --
	* [mount-start-1-791000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19374
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-791000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-791000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-791000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-791000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-791000 -n mount-start-1-791000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-791000 -n mount-start-1-791000: exit status 7 (70.806791ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-791000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (9.93s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (9.93s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-022000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-022000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.856369583s)

                                                
                                                
-- stdout --
	* [multinode-022000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19374
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-022000" primary control-plane node in "multinode-022000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-022000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0805 10:32:52.053580    8429 out.go:291] Setting OutFile to fd 1 ...
	I0805 10:32:52.053721    8429 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:32:52.053725    8429 out.go:304] Setting ErrFile to fd 2...
	I0805 10:32:52.053727    8429 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:32:52.053841    8429 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
	I0805 10:32:52.054872    8429 out.go:298] Setting JSON to false
	I0805 10:32:52.070841    8429 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5542,"bootTime":1722873630,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 10:32:52.070905    8429 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 10:32:52.077530    8429 out.go:177] * [multinode-022000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 10:32:52.084500    8429 out.go:177]   - MINIKUBE_LOCATION=19374
	I0805 10:32:52.084551    8429 notify.go:220] Checking for updates...
	I0805 10:32:52.091391    8429 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	I0805 10:32:52.094453    8429 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 10:32:52.097531    8429 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 10:32:52.100418    8429 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	I0805 10:32:52.103433    8429 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 10:32:52.106566    8429 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 10:32:52.110431    8429 out.go:177] * Using the qemu2 driver based on user configuration
	I0805 10:32:52.117496    8429 start.go:297] selected driver: qemu2
	I0805 10:32:52.117503    8429 start.go:901] validating driver "qemu2" against <nil>
	I0805 10:32:52.117511    8429 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 10:32:52.119788    8429 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 10:32:52.123466    8429 out.go:177] * Automatically selected the socket_vmnet network
	I0805 10:32:52.126494    8429 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 10:32:52.126521    8429 cni.go:84] Creating CNI manager for ""
	I0805 10:32:52.126526    8429 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0805 10:32:52.126530    8429 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0805 10:32:52.126552    8429 start.go:340] cluster config:
	{Name:multinode-022000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-022000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 10:32:52.130327    8429 iso.go:125] acquiring lock: {Name:mk3c732f608e41abb95f9d7b90e7a96dff21b06c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 10:32:52.137436    8429 out.go:177] * Starting "multinode-022000" primary control-plane node in "multinode-022000" cluster
	I0805 10:32:52.141491    8429 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 10:32:52.141509    8429 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0805 10:32:52.141522    8429 cache.go:56] Caching tarball of preloaded images
	I0805 10:32:52.141590    8429 preload.go:172] Found /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 10:32:52.141604    8429 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 10:32:52.141819    8429 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/multinode-022000/config.json ...
	I0805 10:32:52.141831    8429 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/multinode-022000/config.json: {Name:mkc8ae5d105b7f624d64308743ce9e0b9e04e948 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 10:32:52.142061    8429 start.go:360] acquireMachinesLock for multinode-022000: {Name:mkb00f076c5be0afa0d8c4c2e732c0f60f89f86b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 10:32:52.142097    8429 start.go:364] duration metric: took 29.833µs to acquireMachinesLock for "multinode-022000"
	I0805 10:32:52.142108    8429 start.go:93] Provisioning new machine with config: &{Name:multinode-022000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-022000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 10:32:52.142143    8429 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 10:32:52.150509    8429 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0805 10:32:52.168429    8429 start.go:159] libmachine.API.Create for "multinode-022000" (driver="qemu2")
	I0805 10:32:52.168454    8429 client.go:168] LocalClient.Create starting
	I0805 10:32:52.168513    8429 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/ca.pem
	I0805 10:32:52.168546    8429 main.go:141] libmachine: Decoding PEM data...
	I0805 10:32:52.168554    8429 main.go:141] libmachine: Parsing certificate...
	I0805 10:32:52.168592    8429 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/cert.pem
	I0805 10:32:52.168616    8429 main.go:141] libmachine: Decoding PEM data...
	I0805 10:32:52.168625    8429 main.go:141] libmachine: Parsing certificate...
	I0805 10:32:52.169072    8429 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19374-6507/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 10:32:52.320087    8429 main.go:141] libmachine: Creating SSH key...
	I0805 10:32:52.376549    8429 main.go:141] libmachine: Creating Disk image...
	I0805 10:32:52.376554    8429 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 10:32:52.376736    8429 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/multinode-022000/disk.qcow2.raw /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/multinode-022000/disk.qcow2
	I0805 10:32:52.385809    8429 main.go:141] libmachine: STDOUT: 
	I0805 10:32:52.385826    8429 main.go:141] libmachine: STDERR: 
	I0805 10:32:52.385874    8429 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/multinode-022000/disk.qcow2 +20000M
	I0805 10:32:52.393591    8429 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 10:32:52.393606    8429 main.go:141] libmachine: STDERR: 
	I0805 10:32:52.393618    8429 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/multinode-022000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/multinode-022000/disk.qcow2
	I0805 10:32:52.393622    8429 main.go:141] libmachine: Starting QEMU VM...
	I0805 10:32:52.393635    8429 qemu.go:418] Using hvf for hardware acceleration
	I0805 10:32:52.393662    8429 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/multinode-022000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/multinode-022000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/multinode-022000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:63:6b:c4:1b:3d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/multinode-022000/disk.qcow2
	I0805 10:32:52.395165    8429 main.go:141] libmachine: STDOUT: 
	I0805 10:32:52.395181    8429 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 10:32:52.395198    8429 client.go:171] duration metric: took 226.742125ms to LocalClient.Create
	I0805 10:32:54.397346    8429 start.go:128] duration metric: took 2.25520925s to createHost
	I0805 10:32:54.397413    8429 start.go:83] releasing machines lock for "multinode-022000", held for 2.2553345s
	W0805 10:32:54.397502    8429 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 10:32:54.408747    8429 out.go:177] * Deleting "multinode-022000" in qemu2 ...
	W0805 10:32:54.438427    8429 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 10:32:54.438452    8429 start.go:729] Will try again in 5 seconds ...
	I0805 10:32:59.439402    8429 start.go:360] acquireMachinesLock for multinode-022000: {Name:mkb00f076c5be0afa0d8c4c2e732c0f60f89f86b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 10:32:59.439902    8429 start.go:364] duration metric: took 406.5µs to acquireMachinesLock for "multinode-022000"
	I0805 10:32:59.440046    8429 start.go:93] Provisioning new machine with config: &{Name:multinode-022000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-022000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 10:32:59.440396    8429 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 10:32:59.457034    8429 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0805 10:32:59.507103    8429 start.go:159] libmachine.API.Create for "multinode-022000" (driver="qemu2")
	I0805 10:32:59.507151    8429 client.go:168] LocalClient.Create starting
	I0805 10:32:59.507275    8429 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/ca.pem
	I0805 10:32:59.507338    8429 main.go:141] libmachine: Decoding PEM data...
	I0805 10:32:59.507355    8429 main.go:141] libmachine: Parsing certificate...
	I0805 10:32:59.507421    8429 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/cert.pem
	I0805 10:32:59.507467    8429 main.go:141] libmachine: Decoding PEM data...
	I0805 10:32:59.507487    8429 main.go:141] libmachine: Parsing certificate...
	I0805 10:32:59.507998    8429 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19374-6507/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 10:32:59.666521    8429 main.go:141] libmachine: Creating SSH key...
	I0805 10:32:59.813403    8429 main.go:141] libmachine: Creating Disk image...
	I0805 10:32:59.813409    8429 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 10:32:59.813604    8429 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/multinode-022000/disk.qcow2.raw /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/multinode-022000/disk.qcow2
	I0805 10:32:59.823206    8429 main.go:141] libmachine: STDOUT: 
	I0805 10:32:59.823224    8429 main.go:141] libmachine: STDERR: 
	I0805 10:32:59.823272    8429 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/multinode-022000/disk.qcow2 +20000M
	I0805 10:32:59.831196    8429 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 10:32:59.831209    8429 main.go:141] libmachine: STDERR: 
	I0805 10:32:59.831220    8429 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/multinode-022000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/multinode-022000/disk.qcow2
	I0805 10:32:59.831225    8429 main.go:141] libmachine: Starting QEMU VM...
	I0805 10:32:59.831247    8429 qemu.go:418] Using hvf for hardware acceleration
	I0805 10:32:59.831276    8429 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/multinode-022000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/multinode-022000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/multinode-022000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:2a:11:ac:9b:f9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/multinode-022000/disk.qcow2
	I0805 10:32:59.832874    8429 main.go:141] libmachine: STDOUT: 
	I0805 10:32:59.832893    8429 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 10:32:59.832905    8429 client.go:171] duration metric: took 325.753209ms to LocalClient.Create
	I0805 10:33:01.835064    8429 start.go:128] duration metric: took 2.394650167s to createHost
	I0805 10:33:01.835143    8429 start.go:83] releasing machines lock for "multinode-022000", held for 2.39522825s
	W0805 10:33:01.835612    8429 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-022000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-022000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 10:33:01.846162    8429 out.go:177] 
	W0805 10:33:01.856155    8429 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 10:33:01.856181    8429 out.go:239] * 
	* 
	W0805 10:33:01.858974    8429 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 10:33:01.867100    8429 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-022000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-022000 -n multinode-022000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-022000 -n multinode-022000: exit status 7 (66.948917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-022000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.93s)
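
The -alsologtostderr trace above exposes the driver's full recovery path: createHost fails at the first socket_vmnet dial (start.go:714), waits a fixed five seconds (start.go:729), retries once, and only then exits 80 with GUEST_PROVISION. Reduced to a sketch, with the dial error stubbed in (a hypothetical simplification, not minikube's actual start.go):

	// retrystart.go - the retry shape visible in the trace: fail, back off
	// five seconds, retry once, then give up with GUEST_PROVISION.
	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	// createHost stands in for libmachine.API.Create; on this host it always
	// fails because /var/run/socket_vmnet refuses connections.
	func createHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		err := createHost()
		if err == nil {
			return
		}
		fmt.Printf("! StartHost failed, but will try again: %v\n", err)
		time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
		if err := createHost(); err != nil {
			fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
			os.Exit(80) // the exit status asserted by the test above
		}
	}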

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (101.81s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-022000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-022000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (59.055792ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-022000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-022000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-022000 -- rollout status deployment/busybox: exit status 1 (56.713833ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-022000"

                                                
                                                
** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-022000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-022000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (55.654083ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-022000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-022000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-022000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.22075ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-022000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-022000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-022000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.117958ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-022000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-022000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-022000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.978916ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-022000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-022000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-022000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.863708ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-022000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-022000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-022000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.23325ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-022000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-022000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-022000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.89975ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-022000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-022000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-022000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.930334ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-022000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-022000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-022000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.684167ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-022000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-022000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-022000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.644458ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-022000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-022000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-022000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.286083ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-022000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-022000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-022000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (56.26475ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-022000"

                                                
                                                
** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-022000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-022000 -- exec  -- nslookup kubernetes.io: exit status 1 (55.702833ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-022000"

** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-022000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-022000 -- exec  -- nslookup kubernetes.default: exit status 1 (56.391459ms)

** stderr ** 
	error: no server found for cluster "multinode-022000"

** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-022000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-022000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (56.114833ms)

** stderr ** 
	error: no server found for cluster "multinode-022000"

** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-022000 -n multinode-022000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-022000 -n multinode-022000: exit status 7 (29.315708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-022000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (101.81s)
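Every kubectl probe in this test fails the same way: `error: no server found for cluster "multinode-022000"`, which is what the minikube kubectl wrapper reports when the profile's kubeconfig entry has no reachable API server (the VM never came up). A minimal Go sketch of the retry loop implied by the repeated multinode_test.go:505/508 lines above — illustrative only; the helper name, attempt count, and sleep interval are assumptions, not the test's actual code:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// podIPs runs the same probe the log shows at multinode_test.go:505.
func podIPs(profile string) (string, error) {
	out, err := exec.Command("out/minikube-darwin-arm64", "kubectl", "-p", profile,
		"--", "get", "pods", "-o", "jsonpath={.items[*].status.podIP}").Output()
	return string(out), err
}

func main() {
	for attempt := 0; attempt < 4; attempt++ { // four attempts are visible above
		if ips, err := podIPs("multinode-022000"); err == nil {
			fmt.Println("pod IPs:", ips)
			return
		}
		time.Sleep(2 * time.Second) // assumed interval; not visible in the log
	}
	fmt.Println("failed to retrieve Pod IPs (may be temporary)")
}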

TestMultiNode/serial/PingHostFrom2Pods (0.09s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-022000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-022000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (55.722667ms)

** stderr ** 
	error: no server found for cluster "multinode-022000"

** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-022000 -n multinode-022000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-022000 -n multinode-022000: exit status 7 (30.384958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-022000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

TestMultiNode/serial/AddNode (0.07s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-022000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-022000 -v 3 --alsologtostderr: exit status 83 (39.337792ms)

-- stdout --
	* The control-plane node multinode-022000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-022000"

-- /stdout --
** stderr ** 
	I0805 10:34:43.876332    8540 out.go:291] Setting OutFile to fd 1 ...
	I0805 10:34:43.876502    8540 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:34:43.876505    8540 out.go:304] Setting ErrFile to fd 2...
	I0805 10:34:43.876507    8540 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:34:43.876627    8540 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
	I0805 10:34:43.876847    8540 mustload.go:65] Loading cluster: multinode-022000
	I0805 10:34:43.877022    8540 config.go:182] Loaded profile config "multinode-022000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 10:34:43.880369    8540 out.go:177] * The control-plane node multinode-022000 host is not running: state=Stopped
	I0805 10:34:43.883247    8540 out.go:177]   To start a cluster, run: "minikube start -p multinode-022000"

** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-022000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-022000 -n multinode-022000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-022000 -n multinode-022000: exit status 7 (29.365083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-022000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-022000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-022000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.4675ms)

** stderr ** 
	Error in configuration: context was not found for specified context: multinode-022000

** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-022000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-022000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-022000 -n multinode-022000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-022000 -n multinode-022000: exit status 7 (29.477583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-022000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.08s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-022000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-022000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-022000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"multinode-022000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-022000 -n multinode-022000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-022000 -n multinode-022000: exit status 7 (29.045667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-022000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.08s)
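multinode_test.go:166 fails because the stopped cluster's profile JSON carries only one entry under Config.Nodes where the test expects three. A sketch of how such a check can decode the `profile list --output json` payload shown above and count nodes per profile — the struct here is illustrative and trimmed to the fields involved, not minikube's actual types:

package main

import (
	"encoding/json"
	"fmt"
)

// profileList mirrors just enough of the JSON above to count nodes.
type profileList struct {
	Valid []struct {
		Name   string
		Config struct {
			Nodes []struct {
				ControlPlane bool
				Worker       bool
			}
		}
	} `json:"valid"`
}

func main() {
	// Abbreviated form of the payload captured in the failure message.
	raw := []byte(`{"invalid":[],"valid":[{"Name":"multinode-022000","Config":{"Nodes":[{"ControlPlane":true,"Worker":true}]}}]}`)
	var pl profileList
	if err := json.Unmarshal(raw, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Valid {
		fmt.Printf("%s: %d node(s)\n", p.Name, len(p.Config.Nodes)) // prints 1, not the expected 3
	}
}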

TestMultiNode/serial/CopyFile (0.06s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-022000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-022000 status --output json --alsologtostderr: exit status 7 (29.737458ms)

-- stdout --
	{"Name":"multinode-022000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0805 10:34:44.077981    8552 out.go:291] Setting OutFile to fd 1 ...
	I0805 10:34:44.078125    8552 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:34:44.078129    8552 out.go:304] Setting ErrFile to fd 2...
	I0805 10:34:44.078131    8552 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:34:44.078249    8552 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
	I0805 10:34:44.078369    8552 out.go:298] Setting JSON to true
	I0805 10:34:44.078380    8552 mustload.go:65] Loading cluster: multinode-022000
	I0805 10:34:44.078428    8552 notify.go:220] Checking for updates...
	I0805 10:34:44.078609    8552 config.go:182] Loaded profile config "multinode-022000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 10:34:44.078614    8552 status.go:255] checking status of multinode-022000 ...
	I0805 10:34:44.078842    8552 status.go:330] multinode-022000 host status = "Stopped" (err=<nil>)
	I0805 10:34:44.078845    8552 status.go:343] host is not running, skipping remaining checks
	I0805 10:34:44.078848    8552 status.go:257] multinode-022000 status: &{Name:multinode-022000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-022000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-022000 -n multinode-022000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-022000 -n multinode-022000: exit status 7 (29.832041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-022000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
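The decode error at multinode_test.go:191 (`json: cannot unmarshal object into Go value of type []cmd.Status`) arises because `status --output json` printed a single JSON object for the lone surviving node while the caller unmarshals into a slice. A tolerant decoder that accepts both shapes — an illustrative sketch, not minikube's actual code:

package main

import (
	"encoding/json"
	"fmt"
)

// Status carries the fields visible in the stdout block above.
type Status struct {
	Name, Host, Kubelet, APIServer, Kubeconfig string
	Worker                                     bool
}

// decodeStatuses tries the multi-node array form first, then falls back
// to the single-object form this log actually produced.
func decodeStatuses(raw []byte) ([]Status, error) {
	var many []Status
	if err := json.Unmarshal(raw, &many); err == nil {
		return many, nil
	}
	var one Status
	if err := json.Unmarshal(raw, &one); err != nil {
		return nil, err
	}
	return []Status{one}, nil
}

func main() {
	raw := []byte(`{"Name":"multinode-022000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)
	statuses, err := decodeStatuses(raw)
	fmt.Println(statuses, err)
}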

TestMultiNode/serial/StopNode (0.14s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-022000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-022000 node stop m03: exit status 85 (47.263834ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-022000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-022000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-022000 status: exit status 7 (29.285875ms)

-- stdout --
	multinode-022000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-022000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-022000 status --alsologtostderr: exit status 7 (29.895125ms)

-- stdout --
	multinode-022000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0805 10:34:44.215156    8560 out.go:291] Setting OutFile to fd 1 ...
	I0805 10:34:44.215297    8560 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:34:44.215300    8560 out.go:304] Setting ErrFile to fd 2...
	I0805 10:34:44.215303    8560 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:34:44.215426    8560 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
	I0805 10:34:44.215534    8560 out.go:298] Setting JSON to false
	I0805 10:34:44.215543    8560 mustload.go:65] Loading cluster: multinode-022000
	I0805 10:34:44.215600    8560 notify.go:220] Checking for updates...
	I0805 10:34:44.215738    8560 config.go:182] Loaded profile config "multinode-022000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 10:34:44.215744    8560 status.go:255] checking status of multinode-022000 ...
	I0805 10:34:44.215951    8560 status.go:330] multinode-022000 host status = "Stopped" (err=<nil>)
	I0805 10:34:44.215954    8560 status.go:343] host is not running, skipping remaining checks
	I0805 10:34:44.215956    8560 status.go:257] multinode-022000 status: &{Name:multinode-022000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-022000 status --alsologtostderr": multinode-022000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-022000 -n multinode-022000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-022000 -n multinode-022000: exit status 7 (29.764625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-022000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.14s)

TestMultiNode/serial/StartAfterStop (47.4s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-022000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-022000 node start m03 -v=7 --alsologtostderr: exit status 85 (47.618459ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0805 10:34:44.275289    8564 out.go:291] Setting OutFile to fd 1 ...
	I0805 10:34:44.275679    8564 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:34:44.275683    8564 out.go:304] Setting ErrFile to fd 2...
	I0805 10:34:44.275686    8564 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:34:44.275820    8564 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
	I0805 10:34:44.276021    8564 mustload.go:65] Loading cluster: multinode-022000
	I0805 10:34:44.276200    8564 config.go:182] Loaded profile config "multinode-022000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 10:34:44.280649    8564 out.go:177] 
	W0805 10:34:44.283682    8564 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0805 10:34:44.283687    8564 out.go:239] * 
	* 
	W0805 10:34:44.285694    8564 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 10:34:44.289487    8564 out.go:177] 

** /stderr **
multinode_test.go:284: I0805 10:34:44.275289    8564 out.go:291] Setting OutFile to fd 1 ...
I0805 10:34:44.275679    8564 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 10:34:44.275683    8564 out.go:304] Setting ErrFile to fd 2...
I0805 10:34:44.275686    8564 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 10:34:44.275820    8564 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
I0805 10:34:44.276021    8564 mustload.go:65] Loading cluster: multinode-022000
I0805 10:34:44.276200    8564 config.go:182] Loaded profile config "multinode-022000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0805 10:34:44.280649    8564 out.go:177] 
W0805 10:34:44.283682    8564 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0805 10:34:44.283687    8564 out.go:239] * 
* 
W0805 10:34:44.285694    8564 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0805 10:34:44.289487    8564 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-022000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-022000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-022000 status -v=7 --alsologtostderr: exit status 7 (29.058333ms)

-- stdout --
	multinode-022000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0805 10:34:44.322027    8566 out.go:291] Setting OutFile to fd 1 ...
	I0805 10:34:44.322166    8566 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:34:44.322169    8566 out.go:304] Setting ErrFile to fd 2...
	I0805 10:34:44.322172    8566 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:34:44.322307    8566 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
	I0805 10:34:44.322428    8566 out.go:298] Setting JSON to false
	I0805 10:34:44.322437    8566 mustload.go:65] Loading cluster: multinode-022000
	I0805 10:34:44.322502    8566 notify.go:220] Checking for updates...
	I0805 10:34:44.322627    8566 config.go:182] Loaded profile config "multinode-022000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 10:34:44.322635    8566 status.go:255] checking status of multinode-022000 ...
	I0805 10:34:44.322848    8566 status.go:330] multinode-022000 host status = "Stopped" (err=<nil>)
	I0805 10:34:44.322851    8566 status.go:343] host is not running, skipping remaining checks
	I0805 10:34:44.322854    8566 status.go:257] multinode-022000 status: &{Name:multinode-022000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-022000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-022000 status -v=7 --alsologtostderr: exit status 7 (73.705166ms)

-- stdout --
	multinode-022000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0805 10:34:45.003144    8568 out.go:291] Setting OutFile to fd 1 ...
	I0805 10:34:45.003354    8568 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:34:45.003359    8568 out.go:304] Setting ErrFile to fd 2...
	I0805 10:34:45.003362    8568 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:34:45.003546    8568 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
	I0805 10:34:45.003712    8568 out.go:298] Setting JSON to false
	I0805 10:34:45.003723    8568 mustload.go:65] Loading cluster: multinode-022000
	I0805 10:34:45.003775    8568 notify.go:220] Checking for updates...
	I0805 10:34:45.003986    8568 config.go:182] Loaded profile config "multinode-022000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 10:34:45.003996    8568 status.go:255] checking status of multinode-022000 ...
	I0805 10:34:45.004279    8568 status.go:330] multinode-022000 host status = "Stopped" (err=<nil>)
	I0805 10:34:45.004284    8568 status.go:343] host is not running, skipping remaining checks
	I0805 10:34:45.004287    8568 status.go:257] multinode-022000 status: &{Name:multinode-022000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-022000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-022000 status -v=7 --alsologtostderr: exit status 7 (73.8855ms)

-- stdout --
	multinode-022000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0805 10:34:46.254603    8570 out.go:291] Setting OutFile to fd 1 ...
	I0805 10:34:46.254829    8570 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:34:46.254833    8570 out.go:304] Setting ErrFile to fd 2...
	I0805 10:34:46.254836    8570 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:34:46.255015    8570 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
	I0805 10:34:46.255184    8570 out.go:298] Setting JSON to false
	I0805 10:34:46.255195    8570 mustload.go:65] Loading cluster: multinode-022000
	I0805 10:34:46.255241    8570 notify.go:220] Checking for updates...
	I0805 10:34:46.255446    8570 config.go:182] Loaded profile config "multinode-022000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 10:34:46.255454    8570 status.go:255] checking status of multinode-022000 ...
	I0805 10:34:46.255744    8570 status.go:330] multinode-022000 host status = "Stopped" (err=<nil>)
	I0805 10:34:46.255749    8570 status.go:343] host is not running, skipping remaining checks
	I0805 10:34:46.255752    8570 status.go:257] multinode-022000 status: &{Name:multinode-022000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-022000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-022000 status -v=7 --alsologtostderr: exit status 7 (71.98075ms)

-- stdout --
	multinode-022000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0805 10:34:48.496330    8572 out.go:291] Setting OutFile to fd 1 ...
	I0805 10:34:48.496503    8572 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:34:48.496510    8572 out.go:304] Setting ErrFile to fd 2...
	I0805 10:34:48.496516    8572 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:34:48.496695    8572 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
	I0805 10:34:48.496856    8572 out.go:298] Setting JSON to false
	I0805 10:34:48.496870    8572 mustload.go:65] Loading cluster: multinode-022000
	I0805 10:34:48.496917    8572 notify.go:220] Checking for updates...
	I0805 10:34:48.497132    8572 config.go:182] Loaded profile config "multinode-022000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 10:34:48.497140    8572 status.go:255] checking status of multinode-022000 ...
	I0805 10:34:48.497411    8572 status.go:330] multinode-022000 host status = "Stopped" (err=<nil>)
	I0805 10:34:48.497416    8572 status.go:343] host is not running, skipping remaining checks
	I0805 10:34:48.497419    8572 status.go:257] multinode-022000 status: &{Name:multinode-022000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-022000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-022000 status -v=7 --alsologtostderr: exit status 7 (72.18275ms)

-- stdout --
	multinode-022000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0805 10:34:51.980304    8577 out.go:291] Setting OutFile to fd 1 ...
	I0805 10:34:51.980508    8577 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:34:51.980512    8577 out.go:304] Setting ErrFile to fd 2...
	I0805 10:34:51.980515    8577 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:34:51.980678    8577 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
	I0805 10:34:51.980853    8577 out.go:298] Setting JSON to false
	I0805 10:34:51.980864    8577 mustload.go:65] Loading cluster: multinode-022000
	I0805 10:34:51.980890    8577 notify.go:220] Checking for updates...
	I0805 10:34:51.981114    8577 config.go:182] Loaded profile config "multinode-022000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 10:34:51.981122    8577 status.go:255] checking status of multinode-022000 ...
	I0805 10:34:51.981411    8577 status.go:330] multinode-022000 host status = "Stopped" (err=<nil>)
	I0805 10:34:51.981415    8577 status.go:343] host is not running, skipping remaining checks
	I0805 10:34:51.981418    8577 status.go:257] multinode-022000 status: &{Name:multinode-022000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-022000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-022000 status -v=7 --alsologtostderr: exit status 7 (71.326ms)

-- stdout --
	multinode-022000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0805 10:34:57.342090    8579 out.go:291] Setting OutFile to fd 1 ...
	I0805 10:34:57.342285    8579 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:34:57.342289    8579 out.go:304] Setting ErrFile to fd 2...
	I0805 10:34:57.342292    8579 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:34:57.342461    8579 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
	I0805 10:34:57.342612    8579 out.go:298] Setting JSON to false
	I0805 10:34:57.342630    8579 mustload.go:65] Loading cluster: multinode-022000
	I0805 10:34:57.342673    8579 notify.go:220] Checking for updates...
	I0805 10:34:57.342887    8579 config.go:182] Loaded profile config "multinode-022000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 10:34:57.342898    8579 status.go:255] checking status of multinode-022000 ...
	I0805 10:34:57.343186    8579 status.go:330] multinode-022000 host status = "Stopped" (err=<nil>)
	I0805 10:34:57.343191    8579 status.go:343] host is not running, skipping remaining checks
	I0805 10:34:57.343194    8579 status.go:257] multinode-022000 status: &{Name:multinode-022000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-022000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-022000 status -v=7 --alsologtostderr: exit status 7 (71.497458ms)

-- stdout --
	multinode-022000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0805 10:35:05.650566    8581 out.go:291] Setting OutFile to fd 1 ...
	I0805 10:35:05.650767    8581 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:35:05.650773    8581 out.go:304] Setting ErrFile to fd 2...
	I0805 10:35:05.650776    8581 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:35:05.650977    8581 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
	I0805 10:35:05.651147    8581 out.go:298] Setting JSON to false
	I0805 10:35:05.651159    8581 mustload.go:65] Loading cluster: multinode-022000
	I0805 10:35:05.651201    8581 notify.go:220] Checking for updates...
	I0805 10:35:05.651459    8581 config.go:182] Loaded profile config "multinode-022000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 10:35:05.651468    8581 status.go:255] checking status of multinode-022000 ...
	I0805 10:35:05.651781    8581 status.go:330] multinode-022000 host status = "Stopped" (err=<nil>)
	I0805 10:35:05.651786    8581 status.go:343] host is not running, skipping remaining checks
	I0805 10:35:05.651789    8581 status.go:257] multinode-022000 status: &{Name:multinode-022000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-022000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-022000 status -v=7 --alsologtostderr: exit status 7 (74.997584ms)

-- stdout --
	multinode-022000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0805 10:35:12.320235    8591 out.go:291] Setting OutFile to fd 1 ...
	I0805 10:35:12.320440    8591 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:35:12.320445    8591 out.go:304] Setting ErrFile to fd 2...
	I0805 10:35:12.320448    8591 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:35:12.320631    8591 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
	I0805 10:35:12.320800    8591 out.go:298] Setting JSON to false
	I0805 10:35:12.320811    8591 mustload.go:65] Loading cluster: multinode-022000
	I0805 10:35:12.320854    8591 notify.go:220] Checking for updates...
	I0805 10:35:12.321090    8591 config.go:182] Loaded profile config "multinode-022000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 10:35:12.321097    8591 status.go:255] checking status of multinode-022000 ...
	I0805 10:35:12.321372    8591 status.go:330] multinode-022000 host status = "Stopped" (err=<nil>)
	I0805 10:35:12.321376    8591 status.go:343] host is not running, skipping remaining checks
	I0805 10:35:12.321380    8591 status.go:257] multinode-022000 status: &{Name:multinode-022000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-022000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-022000 status -v=7 --alsologtostderr: exit status 7 (72.628166ms)

-- stdout --
	multinode-022000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0805 10:35:31.611036    8597 out.go:291] Setting OutFile to fd 1 ...
	I0805 10:35:31.611236    8597 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:35:31.611240    8597 out.go:304] Setting ErrFile to fd 2...
	I0805 10:35:31.611243    8597 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:35:31.611421    8597 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
	I0805 10:35:31.611580    8597 out.go:298] Setting JSON to false
	I0805 10:35:31.611592    8597 mustload.go:65] Loading cluster: multinode-022000
	I0805 10:35:31.611629    8597 notify.go:220] Checking for updates...
	I0805 10:35:31.611869    8597 config.go:182] Loaded profile config "multinode-022000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 10:35:31.611878    8597 status.go:255] checking status of multinode-022000 ...
	I0805 10:35:31.612143    8597 status.go:330] multinode-022000 host status = "Stopped" (err=<nil>)
	I0805 10:35:31.612148    8597 status.go:343] host is not running, skipping remaining checks
	I0805 10:35:31.612151    8597 status.go:257] multinode-022000 status: &{Name:multinode-022000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-022000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-022000 -n multinode-022000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-022000 -n multinode-022000: exit status 7 (32.518667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-022000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (47.40s)
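The timestamps in the repeated status probes above (10:34:44 through 10:35:31) widen at roughly doubling intervals before the test gives up at multinode_test.go:294. A minimal backoff loop of that shape — illustrative only; the starting delay and deadline are assumptions, not values taken from the test:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	delay := 500 * time.Millisecond // assumed initial delay
	deadline := time.Now().Add(time.Minute)
	for time.Now().Before(deadline) {
		// Same invocation as the probe in the log.
		err := exec.Command("out/minikube-darwin-arm64", "-p", "multinode-022000",
			"status", "-v=7", "--alsologtostderr").Run()
		if err == nil {
			fmt.Println("host is running")
			return
		}
		time.Sleep(delay)
		delay *= 2 // back off, matching the widening gaps between probes
	}
	fmt.Println("gave up: host never reached Running")
}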

TestMultiNode/serial/RestartKeepsNodes (8.54s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-022000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-022000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-022000: (3.189717875s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-022000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-022000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.219087917s)

-- stdout --
	* [multinode-022000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19374
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-022000" primary control-plane node in "multinode-022000" cluster
	* Restarting existing qemu2 VM for "multinode-022000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-022000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0805 10:35:34.926772    8623 out.go:291] Setting OutFile to fd 1 ...
	I0805 10:35:34.926931    8623 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:35:34.926936    8623 out.go:304] Setting ErrFile to fd 2...
	I0805 10:35:34.926939    8623 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:35:34.927112    8623 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
	I0805 10:35:34.928358    8623 out.go:298] Setting JSON to false
	I0805 10:35:34.947543    8623 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5704,"bootTime":1722873630,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 10:35:34.947619    8623 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 10:35:34.951422    8623 out.go:177] * [multinode-022000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 10:35:34.957378    8623 out.go:177]   - MINIKUBE_LOCATION=19374
	I0805 10:35:34.957434    8623 notify.go:220] Checking for updates...
	I0805 10:35:34.964327    8623 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	I0805 10:35:34.967385    8623 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 10:35:34.970343    8623 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 10:35:34.973318    8623 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	I0805 10:35:34.976337    8623 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 10:35:34.979568    8623 config.go:182] Loaded profile config "multinode-022000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 10:35:34.979622    8623 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 10:35:34.984285    8623 out.go:177] * Using the qemu2 driver based on existing profile
	I0805 10:35:34.991267    8623 start.go:297] selected driver: qemu2
	I0805 10:35:34.991274    8623 start.go:901] validating driver "qemu2" against &{Name:multinode-022000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-022000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 10:35:34.991330    8623 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 10:35:34.993473    8623 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 10:35:34.993500    8623 cni.go:84] Creating CNI manager for ""
	I0805 10:35:34.993512    8623 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0805 10:35:34.993558    8623 start.go:340] cluster config:
	{Name:multinode-022000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-022000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 10:35:34.996986    8623 iso.go:125] acquiring lock: {Name:mk3c732f608e41abb95f9d7b90e7a96dff21b06c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 10:35:35.004165    8623 out.go:177] * Starting "multinode-022000" primary control-plane node in "multinode-022000" cluster
	I0805 10:35:35.008257    8623 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 10:35:35.008274    8623 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0805 10:35:35.008281    8623 cache.go:56] Caching tarball of preloaded images
	I0805 10:35:35.008336    8623 preload.go:172] Found /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 10:35:35.008342    8623 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 10:35:35.008389    8623 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/multinode-022000/config.json ...
	I0805 10:35:35.008852    8623 start.go:360] acquireMachinesLock for multinode-022000: {Name:mkb00f076c5be0afa0d8c4c2e732c0f60f89f86b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 10:35:35.008888    8623 start.go:364] duration metric: took 29.5µs to acquireMachinesLock for "multinode-022000"
	I0805 10:35:35.008897    8623 start.go:96] Skipping create...Using existing machine configuration
	I0805 10:35:35.008902    8623 fix.go:54] fixHost starting: 
	I0805 10:35:35.009023    8623 fix.go:112] recreateIfNeeded on multinode-022000: state=Stopped err=<nil>
	W0805 10:35:35.009031    8623 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 10:35:35.017273    8623 out.go:177] * Restarting existing qemu2 VM for "multinode-022000" ...
	I0805 10:35:35.021288    8623 qemu.go:418] Using hvf for hardware acceleration
	I0805 10:35:35.021326    8623 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/multinode-022000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/multinode-022000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/multinode-022000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:2a:11:ac:9b:f9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/multinode-022000/disk.qcow2
	I0805 10:35:35.023356    8623 main.go:141] libmachine: STDOUT: 
	I0805 10:35:35.023376    8623 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 10:35:35.023404    8623 fix.go:56] duration metric: took 14.502417ms for fixHost
	I0805 10:35:35.023408    8623 start.go:83] releasing machines lock for "multinode-022000", held for 14.515834ms
	W0805 10:35:35.023416    8623 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 10:35:35.023449    8623 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 10:35:35.023454    8623 start.go:729] Will try again in 5 seconds ...
	I0805 10:35:40.025587    8623 start.go:360] acquireMachinesLock for multinode-022000: {Name:mkb00f076c5be0afa0d8c4c2e732c0f60f89f86b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 10:35:40.026083    8623 start.go:364] duration metric: took 336.209µs to acquireMachinesLock for "multinode-022000"
	I0805 10:35:40.026212    8623 start.go:96] Skipping create...Using existing machine configuration
	I0805 10:35:40.026234    8623 fix.go:54] fixHost starting: 
	I0805 10:35:40.026955    8623 fix.go:112] recreateIfNeeded on multinode-022000: state=Stopped err=<nil>
	W0805 10:35:40.026981    8623 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 10:35:40.035479    8623 out.go:177] * Restarting existing qemu2 VM for "multinode-022000" ...
	I0805 10:35:40.039507    8623 qemu.go:418] Using hvf for hardware acceleration
	I0805 10:35:40.039748    8623 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/multinode-022000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/multinode-022000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/multinode-022000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:2a:11:ac:9b:f9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/multinode-022000/disk.qcow2
	I0805 10:35:40.048910    8623 main.go:141] libmachine: STDOUT: 
	I0805 10:35:40.048971    8623 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 10:35:40.049054    8623 fix.go:56] duration metric: took 22.824375ms for fixHost
	I0805 10:35:40.049070    8623 start.go:83] releasing machines lock for "multinode-022000", held for 22.964209ms
	W0805 10:35:40.049261    8623 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-022000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-022000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 10:35:40.056462    8623 out.go:177] 
	W0805 10:35:40.060543    8623 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 10:35:40.060571    8623 out.go:239] * 
	* 
	W0805 10:35:40.063078    8623 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 10:35:40.070375    8623 out.go:177] 

** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-022000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-022000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-022000 -n multinode-022000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-022000 -n multinode-022000: exit status 7 (32.429667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-022000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (8.54s)
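
Note: every failure in this block traces to the single error visible in the stderr above: nothing is accepting connections on the unix socket /var/run/socket_vmnet, so socket_vmnet_client (and with it the qemu2 driver) fails with "Connection refused" before the VM can boot. A minimal diagnostic sketch for the CI host follows; it reuses the client path recorded in the log, but the exact daemon setup on this machine is an assumption, not something the report confirms.

    # Does the control socket exist, and is a daemon holding it?
    ls -l /var/run/socket_vmnet        # expect a socket file ('s' in the mode bits)
    pgrep -fl socket_vmnet             # expect the daemon process to be listed

    # Exercise the client the same way the driver does, wrapping a no-op command;
    # a healthy daemon runs it, a dead one reproduces the "Connection refused" above.
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true && echo socket ok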

TestMultiNode/serial/DeleteNode (0.1s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-022000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-022000 node delete m03: exit status 83 (40.880625ms)

-- stdout --
	* The control-plane node multinode-022000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-022000"

-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-022000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-022000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-022000 status --alsologtostderr: exit status 7 (29.363375ms)

-- stdout --
	multinode-022000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0805 10:35:40.257615    8643 out.go:291] Setting OutFile to fd 1 ...
	I0805 10:35:40.257741    8643 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:35:40.257744    8643 out.go:304] Setting ErrFile to fd 2...
	I0805 10:35:40.257746    8643 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:35:40.257888    8643 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
	I0805 10:35:40.258013    8643 out.go:298] Setting JSON to false
	I0805 10:35:40.258022    8643 mustload.go:65] Loading cluster: multinode-022000
	I0805 10:35:40.258078    8643 notify.go:220] Checking for updates...
	I0805 10:35:40.258213    8643 config.go:182] Loaded profile config "multinode-022000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 10:35:40.258218    8643 status.go:255] checking status of multinode-022000 ...
	I0805 10:35:40.258429    8643 status.go:330] multinode-022000 host status = "Stopped" (err=<nil>)
	I0805 10:35:40.258433    8643 status.go:343] host is not running, skipping remaining checks
	I0805 10:35:40.258435    8643 status.go:257] multinode-022000 status: &{Name:multinode-022000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-022000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-022000 -n multinode-022000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-022000 -n multinode-022000: exit status 7 (29.888875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-022000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)

TestMultiNode/serial/StopMultiNode (2.18s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-022000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-022000 stop: (2.055541375s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-022000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-022000 status: exit status 7 (67.017834ms)

-- stdout --
	multinode-022000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-022000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-022000 status --alsologtostderr: exit status 7 (32.172042ms)

-- stdout --
	multinode-022000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0805 10:35:42.442912    8661 out.go:291] Setting OutFile to fd 1 ...
	I0805 10:35:42.443047    8661 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:35:42.443051    8661 out.go:304] Setting ErrFile to fd 2...
	I0805 10:35:42.443053    8661 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:35:42.443166    8661 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
	I0805 10:35:42.443277    8661 out.go:298] Setting JSON to false
	I0805 10:35:42.443285    8661 mustload.go:65] Loading cluster: multinode-022000
	I0805 10:35:42.443358    8661 notify.go:220] Checking for updates...
	I0805 10:35:42.443470    8661 config.go:182] Loaded profile config "multinode-022000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 10:35:42.443476    8661 status.go:255] checking status of multinode-022000 ...
	I0805 10:35:42.443673    8661 status.go:330] multinode-022000 host status = "Stopped" (err=<nil>)
	I0805 10:35:42.443676    8661 status.go:343] host is not running, skipping remaining checks
	I0805 10:35:42.443679    8661 status.go:257] multinode-022000 status: &{Name:multinode-022000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-022000 status --alsologtostderr": multinode-022000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-022000 status --alsologtostderr": multinode-022000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-022000 -n multinode-022000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-022000 -n multinode-022000: exit status 7 (29.622417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-022000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (2.18s)

TestMultiNode/serial/RestartMultiNode (5.25s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-022000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-022000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.176867167s)

-- stdout --
	* [multinode-022000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19374
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-022000" primary control-plane node in "multinode-022000" cluster
	* Restarting existing qemu2 VM for "multinode-022000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-022000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0805 10:35:42.501567    8665 out.go:291] Setting OutFile to fd 1 ...
	I0805 10:35:42.501690    8665 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:35:42.501698    8665 out.go:304] Setting ErrFile to fd 2...
	I0805 10:35:42.501701    8665 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:35:42.501826    8665 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
	I0805 10:35:42.502817    8665 out.go:298] Setting JSON to false
	I0805 10:35:42.519073    8665 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5712,"bootTime":1722873630,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 10:35:42.519134    8665 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 10:35:42.523302    8665 out.go:177] * [multinode-022000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 10:35:42.530122    8665 out.go:177]   - MINIKUBE_LOCATION=19374
	I0805 10:35:42.530192    8665 notify.go:220] Checking for updates...
	I0805 10:35:42.535408    8665 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	I0805 10:35:42.538080    8665 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 10:35:42.541117    8665 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 10:35:42.544150    8665 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	I0805 10:35:42.547205    8665 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 10:35:42.550432    8665 config.go:182] Loaded profile config "multinode-022000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 10:35:42.550694    8665 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 10:35:42.555066    8665 out.go:177] * Using the qemu2 driver based on existing profile
	I0805 10:35:42.562118    8665 start.go:297] selected driver: qemu2
	I0805 10:35:42.562127    8665 start.go:901] validating driver "qemu2" against &{Name:multinode-022000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-022000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 10:35:42.562216    8665 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 10:35:42.564542    8665 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 10:35:42.564565    8665 cni.go:84] Creating CNI manager for ""
	I0805 10:35:42.564569    8665 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0805 10:35:42.564617    8665 start.go:340] cluster config:
	{Name:multinode-022000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-022000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 10:35:42.568001    8665 iso.go:125] acquiring lock: {Name:mk3c732f608e41abb95f9d7b90e7a96dff21b06c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 10:35:42.576085    8665 out.go:177] * Starting "multinode-022000" primary control-plane node in "multinode-022000" cluster
	I0805 10:35:42.580089    8665 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 10:35:42.580104    8665 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0805 10:35:42.580115    8665 cache.go:56] Caching tarball of preloaded images
	I0805 10:35:42.580167    8665 preload.go:172] Found /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 10:35:42.580172    8665 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 10:35:42.580216    8665 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/multinode-022000/config.json ...
	I0805 10:35:42.580686    8665 start.go:360] acquireMachinesLock for multinode-022000: {Name:mkb00f076c5be0afa0d8c4c2e732c0f60f89f86b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 10:35:42.580721    8665 start.go:364] duration metric: took 29.667µs to acquireMachinesLock for "multinode-022000"
	I0805 10:35:42.580730    8665 start.go:96] Skipping create...Using existing machine configuration
	I0805 10:35:42.580735    8665 fix.go:54] fixHost starting: 
	I0805 10:35:42.580851    8665 fix.go:112] recreateIfNeeded on multinode-022000: state=Stopped err=<nil>
	W0805 10:35:42.580859    8665 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 10:35:42.585066    8665 out.go:177] * Restarting existing qemu2 VM for "multinode-022000" ...
	I0805 10:35:42.589117    8665 qemu.go:418] Using hvf for hardware acceleration
	I0805 10:35:42.589157    8665 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/multinode-022000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/multinode-022000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/multinode-022000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:2a:11:ac:9b:f9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/multinode-022000/disk.qcow2
	I0805 10:35:42.591242    8665 main.go:141] libmachine: STDOUT: 
	I0805 10:35:42.591260    8665 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 10:35:42.591287    8665 fix.go:56] duration metric: took 10.55325ms for fixHost
	I0805 10:35:42.591290    8665 start.go:83] releasing machines lock for "multinode-022000", held for 10.564666ms
	W0805 10:35:42.591297    8665 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 10:35:42.591336    8665 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 10:35:42.591341    8665 start.go:729] Will try again in 5 seconds ...
	I0805 10:35:47.593400    8665 start.go:360] acquireMachinesLock for multinode-022000: {Name:mkb00f076c5be0afa0d8c4c2e732c0f60f89f86b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 10:35:47.593746    8665 start.go:364] duration metric: took 254.333µs to acquireMachinesLock for "multinode-022000"
	I0805 10:35:47.593858    8665 start.go:96] Skipping create...Using existing machine configuration
	I0805 10:35:47.593873    8665 fix.go:54] fixHost starting: 
	I0805 10:35:47.594543    8665 fix.go:112] recreateIfNeeded on multinode-022000: state=Stopped err=<nil>
	W0805 10:35:47.594569    8665 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 10:35:47.600028    8665 out.go:177] * Restarting existing qemu2 VM for "multinode-022000" ...
	I0805 10:35:47.606907    8665 qemu.go:418] Using hvf for hardware acceleration
	I0805 10:35:47.607091    8665 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/multinode-022000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/multinode-022000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/multinode-022000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:2a:11:ac:9b:f9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/multinode-022000/disk.qcow2
	I0805 10:35:47.615763    8665 main.go:141] libmachine: STDOUT: 
	I0805 10:35:47.615869    8665 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 10:35:47.615945    8665 fix.go:56] duration metric: took 22.071333ms for fixHost
	I0805 10:35:47.615961    8665 start.go:83] releasing machines lock for "multinode-022000", held for 22.188625ms
	W0805 10:35:47.616209    8665 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-022000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-022000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 10:35:47.622912    8665 out.go:177] 
	W0805 10:35:47.627036    8665 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 10:35:47.627059    8665 out.go:239] * 
	* 
	W0805 10:35:47.629513    8665 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 10:35:47.637879    8665 out.go:177] 

** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-022000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-022000 -n multinode-022000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-022000 -n multinode-022000: exit status 7 (67.006583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-022000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.25s)

TestMultiNode/serial/ValidateNameConflict (20.33s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-022000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-022000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-022000-m01 --driver=qemu2 : exit status 80 (10.051506375s)

-- stdout --
	* [multinode-022000-m01] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19374
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-022000-m01" primary control-plane node in "multinode-022000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-022000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-022000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-022000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-022000-m02 --driver=qemu2 : exit status 80 (10.050225667s)

-- stdout --
	* [multinode-022000-m02] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19374
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-022000-m02" primary control-plane node in "multinode-022000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-022000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-022000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-022000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-022000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-022000: exit status 83 (81.66025ms)

-- stdout --
	* The control-plane node multinode-022000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-022000"

-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-022000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-022000 -n multinode-022000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-022000 -n multinode-022000: exit status 7 (30.07925ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-022000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.33s)
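
Note: the remaining qemu2-driver tests in this run fail the same way until the socket_vmnet daemon is restored. A hedged recovery sketch follows; the launchd label is the upstream default from the lima-vm/socket_vmnet project, and the label, gateway address, and binary path are assumptions about this host rather than values taken from the report.

    # If the daemon is managed by launchd, find its job and kick it.
    sudo launchctl list | grep -i socket_vmnet
    sudo launchctl kickstart -k system/io.github.lima-vm.socket_vmnet

    # Or run the daemon in the foreground for a one-off check (upstream's
    # documented invocation; adjust the gateway and paths to the local install).
    sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet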

TestPreload (9.93s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-889000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-889000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.787966375s)

-- stdout --
	* [test-preload-889000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19374
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-889000" primary control-plane node in "test-preload-889000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-889000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0805 10:36:08.184424    8726 out.go:291] Setting OutFile to fd 1 ...
	I0805 10:36:08.184534    8726 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:36:08.184537    8726 out.go:304] Setting ErrFile to fd 2...
	I0805 10:36:08.184540    8726 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:36:08.184657    8726 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
	I0805 10:36:08.185754    8726 out.go:298] Setting JSON to false
	I0805 10:36:08.201628    8726 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5738,"bootTime":1722873630,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 10:36:08.201696    8726 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 10:36:08.206938    8726 out.go:177] * [test-preload-889000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 10:36:08.213937    8726 out.go:177]   - MINIKUBE_LOCATION=19374
	I0805 10:36:08.213994    8726 notify.go:220] Checking for updates...
	I0805 10:36:08.221890    8726 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	I0805 10:36:08.224931    8726 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 10:36:08.227886    8726 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 10:36:08.230919    8726 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	I0805 10:36:08.233933    8726 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 10:36:08.235608    8726 config.go:182] Loaded profile config "multinode-022000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 10:36:08.235672    8726 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 10:36:08.239925    8726 out.go:177] * Using the qemu2 driver based on user configuration
	I0805 10:36:08.246751    8726 start.go:297] selected driver: qemu2
	I0805 10:36:08.246757    8726 start.go:901] validating driver "qemu2" against <nil>
	I0805 10:36:08.246762    8726 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 10:36:08.248951    8726 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 10:36:08.251886    8726 out.go:177] * Automatically selected the socket_vmnet network
	I0805 10:36:08.254936    8726 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 10:36:08.254954    8726 cni.go:84] Creating CNI manager for ""
	I0805 10:36:08.254961    8726 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 10:36:08.254965    8726 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0805 10:36:08.254992    8726 start.go:340] cluster config:
	{Name:test-preload-889000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-889000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 10:36:08.258783    8726 iso.go:125] acquiring lock: {Name:mk3c732f608e41abb95f9d7b90e7a96dff21b06c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 10:36:08.266906    8726 out.go:177] * Starting "test-preload-889000" primary control-plane node in "test-preload-889000" cluster
	I0805 10:36:08.270940    8726 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0805 10:36:08.271036    8726 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/test-preload-889000/config.json ...
	I0805 10:36:08.271054    8726 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/test-preload-889000/config.json: {Name:mk92ed9d69fd73b896ad126d566481e30cd57ef6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 10:36:08.271064    8726 cache.go:107] acquiring lock: {Name:mk3086f9dd8d8218bacb245d9935613b918a9bbd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 10:36:08.271073    8726 cache.go:107] acquiring lock: {Name:mk51c9e880791de1866a5f6934617528daccd4f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 10:36:08.271076    8726 cache.go:107] acquiring lock: {Name:mk18e4beb52f3e994e96838088a7a2b588aef5fb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 10:36:08.271271    8726 cache.go:107] acquiring lock: {Name:mk534ed88935ca02821c8ef02c30bf10cb4fc2ee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 10:36:08.271302    8726 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0805 10:36:08.271310    8726 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0805 10:36:08.271340    8726 cache.go:107] acquiring lock: {Name:mkf2026c09def852eaf31a964a81139d36d603d1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 10:36:08.271340    8726 cache.go:107] acquiring lock: {Name:mkbaff917ae13701f36d58141472a722733ec811 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 10:36:08.271360    8726 cache.go:107] acquiring lock: {Name:mka0165ea70290761092b2ca8bb3fd13cc4ae500 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 10:36:08.271378    8726 cache.go:107] acquiring lock: {Name:mkc2365f1a26e1a2e172fc9ec583c60e8d529bba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 10:36:08.271517    8726 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0805 10:36:08.271609    8726 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0805 10:36:08.271625    8726 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0805 10:36:08.271662    8726 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0805 10:36:08.271668    8726 start.go:360] acquireMachinesLock for test-preload-889000: {Name:mkb00f076c5be0afa0d8c4c2e732c0f60f89f86b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 10:36:08.271625    8726 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0805 10:36:08.271684    8726 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 10:36:08.271703    8726 start.go:364] duration metric: took 29.083µs to acquireMachinesLock for "test-preload-889000"
	I0805 10:36:08.271715    8726 start.go:93] Provisioning new machine with config: &{Name:test-preload-889000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-889000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 10:36:08.271766    8726 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 10:36:08.279731    8726 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0805 10:36:08.283543    8726 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0805 10:36:08.283698    8726 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 10:36:08.284337    8726 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0805 10:36:08.284411    8726 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0805 10:36:08.286594    8726 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0805 10:36:08.286631    8726 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0805 10:36:08.286655    8726 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0805 10:36:08.286679    8726 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0805 10:36:08.298167    8726 start.go:159] libmachine.API.Create for "test-preload-889000" (driver="qemu2")
	I0805 10:36:08.298195    8726 client.go:168] LocalClient.Create starting
	I0805 10:36:08.298322    8726 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/ca.pem
	I0805 10:36:08.298360    8726 main.go:141] libmachine: Decoding PEM data...
	I0805 10:36:08.298369    8726 main.go:141] libmachine: Parsing certificate...
	I0805 10:36:08.298424    8726 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/cert.pem
	I0805 10:36:08.298448    8726 main.go:141] libmachine: Decoding PEM data...
	I0805 10:36:08.298461    8726 main.go:141] libmachine: Parsing certificate...
	I0805 10:36:08.298879    8726 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19374-6507/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 10:36:08.450415    8726 main.go:141] libmachine: Creating SSH key...
	I0805 10:36:08.517938    8726 main.go:141] libmachine: Creating Disk image...
	I0805 10:36:08.517957    8726 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 10:36:08.518170    8726 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/test-preload-889000/disk.qcow2.raw /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/test-preload-889000/disk.qcow2
	I0805 10:36:08.528077    8726 main.go:141] libmachine: STDOUT: 
	I0805 10:36:08.528100    8726 main.go:141] libmachine: STDERR: 
	I0805 10:36:08.528145    8726 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/test-preload-889000/disk.qcow2 +20000M
	I0805 10:36:08.536865    8726 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 10:36:08.536884    8726 main.go:141] libmachine: STDERR: 
	I0805 10:36:08.536896    8726 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/test-preload-889000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/test-preload-889000/disk.qcow2
	I0805 10:36:08.536901    8726 main.go:141] libmachine: Starting QEMU VM...
	I0805 10:36:08.536913    8726 qemu.go:418] Using hvf for hardware acceleration
	I0805 10:36:08.536942    8726 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/test-preload-889000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/test-preload-889000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/test-preload-889000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:8e:19:f3:3c:1e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/test-preload-889000/disk.qcow2
	I0805 10:36:08.538810    8726 main.go:141] libmachine: STDOUT: 
	I0805 10:36:08.538910    8726 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 10:36:08.538934    8726 client.go:171] duration metric: took 240.737291ms to LocalClient.Create
	I0805 10:36:08.677324    8726 cache.go:162] opening:  /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	W0805 10:36:08.710316    8726 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0805 10:36:08.710344    8726 cache.go:162] opening:  /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0805 10:36:08.722426    8726 cache.go:162] opening:  /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0805 10:36:08.751047    8726 cache.go:162] opening:  /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0805 10:36:08.763967    8726 cache.go:162] opening:  /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0805 10:36:08.802123    8726 cache.go:162] opening:  /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0805 10:36:08.807907    8726 cache.go:162] opening:  /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0805 10:36:08.895484    8726 cache.go:157] /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0805 10:36:08.895539    8726 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 624.201875ms
	I0805 10:36:08.895586    8726 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W0805 10:36:09.006426    8726 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0805 10:36:09.006501    8726 cache.go:162] opening:  /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0805 10:36:09.376697    8726 cache.go:157] /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0805 10:36:09.376747    8726 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.105685667s
	I0805 10:36:09.376779    8726 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0805 10:36:10.539225    8726 start.go:128] duration metric: took 2.267454083s to createHost
	I0805 10:36:10.539290    8726 start.go:83] releasing machines lock for "test-preload-889000", held for 2.267607417s
	W0805 10:36:10.539353    8726 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 10:36:10.549529    8726 out.go:177] * Deleting "test-preload-889000" in qemu2 ...
	W0805 10:36:10.580015    8726 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 10:36:10.580048    8726 start.go:729] Will try again in 5 seconds ...
	I0805 10:36:10.918859    8726 cache.go:157] /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0805 10:36:10.918902    8726 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 2.647634792s
	I0805 10:36:10.918929    8726 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0805 10:36:11.079170    8726 cache.go:157] /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0805 10:36:11.079217    8726 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 2.807980416s
	I0805 10:36:11.079265    8726 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0805 10:36:12.318314    8726 cache.go:157] /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0805 10:36:12.318366    8726 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 4.047188667s
	I0805 10:36:12.318396    8726 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0805 10:36:12.517256    8726 cache.go:157] /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0805 10:36:12.517304    8726 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 4.246296917s
	I0805 10:36:12.517324    8726 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0805 10:36:13.587441    8726 cache.go:157] /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0805 10:36:13.587505    8726 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 5.316505166s
	I0805 10:36:13.587532    8726 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0805 10:36:15.582089    8726 start.go:360] acquireMachinesLock for test-preload-889000: {Name:mkb00f076c5be0afa0d8c4c2e732c0f60f89f86b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 10:36:15.582518    8726 start.go:364] duration metric: took 347.875µs to acquireMachinesLock for "test-preload-889000"
	I0805 10:36:15.582643    8726 start.go:93] Provisioning new machine with config: &{Name:test-preload-889000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-889000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 10:36:15.582841    8726 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 10:36:15.594537    8726 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0805 10:36:15.645608    8726 start.go:159] libmachine.API.Create for "test-preload-889000" (driver="qemu2")
	I0805 10:36:15.645808    8726 client.go:168] LocalClient.Create starting
	I0805 10:36:15.645930    8726 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/ca.pem
	I0805 10:36:15.646000    8726 main.go:141] libmachine: Decoding PEM data...
	I0805 10:36:15.646016    8726 main.go:141] libmachine: Parsing certificate...
	I0805 10:36:15.646069    8726 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/cert.pem
	I0805 10:36:15.646119    8726 main.go:141] libmachine: Decoding PEM data...
	I0805 10:36:15.646132    8726 main.go:141] libmachine: Parsing certificate...
	I0805 10:36:15.646673    8726 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19374-6507/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 10:36:15.805545    8726 main.go:141] libmachine: Creating SSH key...
	I0805 10:36:15.875291    8726 main.go:141] libmachine: Creating Disk image...
	I0805 10:36:15.875304    8726 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 10:36:15.875485    8726 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/test-preload-889000/disk.qcow2.raw /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/test-preload-889000/disk.qcow2
	I0805 10:36:15.884875    8726 main.go:141] libmachine: STDOUT: 
	I0805 10:36:15.884898    8726 main.go:141] libmachine: STDERR: 
	I0805 10:36:15.884943    8726 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/test-preload-889000/disk.qcow2 +20000M
	I0805 10:36:15.892812    8726 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 10:36:15.892826    8726 main.go:141] libmachine: STDERR: 
	I0805 10:36:15.892835    8726 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/test-preload-889000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/test-preload-889000/disk.qcow2
	I0805 10:36:15.892842    8726 main.go:141] libmachine: Starting QEMU VM...
	I0805 10:36:15.892851    8726 qemu.go:418] Using hvf for hardware acceleration
	I0805 10:36:15.892889    8726 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/test-preload-889000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/test-preload-889000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/test-preload-889000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:0a:79:f2:b6:59 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/test-preload-889000/disk.qcow2
	I0805 10:36:15.894552    8726 main.go:141] libmachine: STDOUT: 
	I0805 10:36:15.894570    8726 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 10:36:15.894584    8726 client.go:171] duration metric: took 248.774208ms to LocalClient.Create
	I0805 10:36:16.774044    8726 cache.go:157] /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I0805 10:36:16.774112    8726 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 8.502875s
	I0805 10:36:16.774156    8726 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I0805 10:36:16.774209    8726 cache.go:87] Successfully saved all images to host disk.
	I0805 10:36:17.896851    8726 start.go:128] duration metric: took 2.313984041s to createHost
	I0805 10:36:17.896905    8726 start.go:83] releasing machines lock for "test-preload-889000", held for 2.314383209s
	W0805 10:36:17.897232    8726 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-889000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-889000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 10:36:17.912887    8726 out.go:177] 
	W0805 10:36:17.917017    8726 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 10:36:17.917054    8726 out.go:239] * 
	* 
	W0805 10:36:17.918415    8726 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 10:36:17.930839    8726 out.go:177] 
** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-889000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:626: *** TestPreload FAILED at 2024-08-05 10:36:17.947984 -0700 PDT m=+648.656473001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-889000 -n test-preload-889000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-889000 -n test-preload-889000: exit status 7 (66.974917ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-889000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-889000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-889000
--- FAIL: TestPreload (9.93s)
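Note: every qemu2 VM creation in this run dies at the same step: minikube shells out to /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach the daemon socket at /var/run/socket_vmnet ("Connection refused"), so the guest never boots and each test exits with GUEST_PROVISION after the single retry. A minimal diagnostic sketch for the CI host follows; the paths are the ones printed in the log above, while the Homebrew service name is an assumption (it depends on how socket_vmnet was installed, which this report does not record):

	# Is anything listening on the socket minikube expects?
	ls -l /var/run/socket_vmnet        # missing socket => daemon never started
	pgrep -fl socket_vmnet             # no match => socket_vmnet is not running
	# If socket_vmnet was installed as a Homebrew service (assumption), restart it:
	sudo brew services restart socket_vmnet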
TestScheduledStopUnix (10s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-990000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-990000 --memory=2048 --driver=qemu2 : exit status 80 (9.8511795s)
-- stdout --
	* [scheduled-stop-990000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19374
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-990000" primary control-plane node in "scheduled-stop-990000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-990000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-990000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80
-- stdout --
	* [scheduled-stop-990000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19374
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-990000" primary control-plane node in "scheduled-stop-990000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-990000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-990000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-08-05 10:36:27.94269 -0700 PDT m=+658.651311959
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-990000 -n scheduled-stop-990000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-990000 -n scheduled-stop-990000: exit status 7 (69.396375ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-990000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-990000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-990000
--- FAIL: TestScheduledStopUnix (10.00s)
TestSkaffold (12.84s)
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe3377141778 version
skaffold_test.go:59: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe3377141778 version: (1.034040791s)
skaffold_test.go:63: skaffold version: v2.13.1
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-094000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-094000 --memory=2600 --driver=qemu2 : exit status 80 (9.809835375s)
-- stdout --
	* [skaffold-094000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19374
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-094000" primary control-plane node in "skaffold-094000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-094000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-094000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
skaffold_test.go:68: starting minikube: exit status 80
-- stdout --
	* [skaffold-094000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19374
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-094000" primary control-plane node in "skaffold-094000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-094000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-094000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
panic.go:626: *** TestSkaffold FAILED at 2024-08-05 10:36:40.778848 -0700 PDT m=+671.487641209
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-094000 -n skaffold-094000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-094000 -n skaffold-094000: exit status 7 (61.622583ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-094000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-094000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-094000
--- FAIL: TestSkaffold (12.84s)
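Note: TestSkaffold fails before skaffold itself is ever exercised; the start aborts on the same socket_vmnet connection failure. If the daemon cannot be restored on the host, one hedged workaround is to point the qemu2 driver at its user-mode network, which needs no host-side daemon. This is a sketch only: the `builtin` value follows minikube's qemu driver documentation, and flag support may vary across minikube versions.

	# User-mode (builtin) networking avoids /var/run/socket_vmnet entirely,
	# at the cost of limited host-to-guest connectivity:
	out/minikube-darwin-arm64 start -p skaffold-094000 --memory=2600 --driver=qemu2 --network=builtin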
TestRunningBinaryUpgrade (626.41s)
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.885570514 start -p running-upgrade-952000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.885570514 start -p running-upgrade-952000 --memory=2200 --vm-driver=qemu2 : (1m10.5033905s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-952000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-952000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m42.073538333s)
-- stdout --
	* [running-upgrade-952000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19374
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-952000" primary control-plane node in "running-upgrade-952000" cluster
	* Updating the running qemu2 "running-upgrade-952000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	
-- /stdout --
** stderr ** 
	I0805 10:38:12.339558    9085 out.go:291] Setting OutFile to fd 1 ...
	I0805 10:38:12.339720    9085 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:38:12.339723    9085 out.go:304] Setting ErrFile to fd 2...
	I0805 10:38:12.339726    9085 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:38:12.339879    9085 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
	I0805 10:38:12.341444    9085 out.go:298] Setting JSON to false
	I0805 10:38:12.361627    9085 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5862,"bootTime":1722873630,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 10:38:12.361731    9085 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 10:38:12.366467    9085 out.go:177] * [running-upgrade-952000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 10:38:12.373543    9085 out.go:177]   - MINIKUBE_LOCATION=19374
	I0805 10:38:12.373651    9085 notify.go:220] Checking for updates...
	I0805 10:38:12.379444    9085 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	I0805 10:38:12.382436    9085 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 10:38:12.383768    9085 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 10:38:12.386401    9085 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	I0805 10:38:12.389485    9085 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 10:38:12.392760    9085 config.go:182] Loaded profile config "running-upgrade-952000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0805 10:38:12.395413    9085 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0805 10:38:12.398451    9085 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 10:38:12.402461    9085 out.go:177] * Using the qemu2 driver based on existing profile
	I0805 10:38:12.409452    9085 start.go:297] selected driver: qemu2
	I0805 10:38:12.409462    9085 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-952000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51256 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-952000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0805 10:38:12.409513    9085 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 10:38:12.412027    9085 cni.go:84] Creating CNI manager for ""
	I0805 10:38:12.412044    9085 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 10:38:12.412076    9085 start.go:340] cluster config:
	{Name:running-upgrade-952000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51256 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-952000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0805 10:38:12.412136    9085 iso.go:125] acquiring lock: {Name:mk3c732f608e41abb95f9d7b90e7a96dff21b06c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 10:38:12.419381    9085 out.go:177] * Starting "running-upgrade-952000" primary control-plane node in "running-upgrade-952000" cluster
	I0805 10:38:12.423417    9085 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0805 10:38:12.423432    9085 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0805 10:38:12.423443    9085 cache.go:56] Caching tarball of preloaded images
	I0805 10:38:12.423499    9085 preload.go:172] Found /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 10:38:12.423505    9085 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0805 10:38:12.423550    9085 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/running-upgrade-952000/config.json ...
	I0805 10:38:12.423910    9085 start.go:360] acquireMachinesLock for running-upgrade-952000: {Name:mkb00f076c5be0afa0d8c4c2e732c0f60f89f86b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 10:38:21.327607    9085 start.go:364] duration metric: took 8.903805959s to acquireMachinesLock for "running-upgrade-952000"
	I0805 10:38:21.327632    9085 start.go:96] Skipping create...Using existing machine configuration
	I0805 10:38:21.327639    9085 fix.go:54] fixHost starting: 
	I0805 10:38:21.328275    9085 fix.go:112] recreateIfNeeded on running-upgrade-952000: state=Running err=<nil>
	W0805 10:38:21.328286    9085 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 10:38:21.336425    9085 out.go:177] * Updating the running qemu2 "running-upgrade-952000" VM ...
	I0805 10:38:21.340391    9085 machine.go:94] provisionDockerMachine start ...
	I0805 10:38:21.340480    9085 main.go:141] libmachine: Using SSH client type: native
	I0805 10:38:21.340641    9085 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10270ea10] 0x102711270 <nil>  [] 0s} localhost 51192 <nil> <nil>}
	I0805 10:38:21.340645    9085 main.go:141] libmachine: About to run SSH command:
	hostname
	I0805 10:38:21.412639    9085 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-952000
	
	I0805 10:38:21.412654    9085 buildroot.go:166] provisioning hostname "running-upgrade-952000"
	I0805 10:38:21.412696    9085 main.go:141] libmachine: Using SSH client type: native
	I0805 10:38:21.412823    9085 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10270ea10] 0x102711270 <nil>  [] 0s} localhost 51192 <nil> <nil>}
	I0805 10:38:21.412829    9085 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-952000 && echo "running-upgrade-952000" | sudo tee /etc/hostname
	I0805 10:38:21.487855    9085 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-952000
	
	I0805 10:38:21.487933    9085 main.go:141] libmachine: Using SSH client type: native
	I0805 10:38:21.488056    9085 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10270ea10] 0x102711270 <nil>  [] 0s} localhost 51192 <nil> <nil>}
	I0805 10:38:21.488066    9085 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-952000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-952000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-952000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 10:38:21.559297    9085 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 10:38:21.559310    9085 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19374-6507/.minikube CaCertPath:/Users/jenkins/minikube-integration/19374-6507/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19374-6507/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19374-6507/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19374-6507/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19374-6507/.minikube}
	I0805 10:38:21.559320    9085 buildroot.go:174] setting up certificates
	I0805 10:38:21.559328    9085 provision.go:84] configureAuth start
	I0805 10:38:21.559336    9085 provision.go:143] copyHostCerts
	I0805 10:38:21.559402    9085 exec_runner.go:144] found /Users/jenkins/minikube-integration/19374-6507/.minikube/ca.pem, removing ...
	I0805 10:38:21.559412    9085 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19374-6507/.minikube/ca.pem
	I0805 10:38:21.559535    9085 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19374-6507/.minikube/ca.pem (1082 bytes)
	I0805 10:38:21.559700    9085 exec_runner.go:144] found /Users/jenkins/minikube-integration/19374-6507/.minikube/cert.pem, removing ...
	I0805 10:38:21.559704    9085 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19374-6507/.minikube/cert.pem
	I0805 10:38:21.559749    9085 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19374-6507/.minikube/cert.pem (1123 bytes)
	I0805 10:38:21.559879    9085 exec_runner.go:144] found /Users/jenkins/minikube-integration/19374-6507/.minikube/key.pem, removing ...
	I0805 10:38:21.559884    9085 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19374-6507/.minikube/key.pem
	I0805 10:38:21.559923    9085 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19374-6507/.minikube/key.pem (1679 bytes)
	I0805 10:38:21.560019    9085 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19374-6507/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19374-6507/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-952000 san=[127.0.0.1 localhost minikube running-upgrade-952000]
	I0805 10:38:21.637312    9085 provision.go:177] copyRemoteCerts
	I0805 10:38:21.637347    9085 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 10:38:21.637355    9085 sshutil.go:53] new ssh client: &{IP:localhost Port:51192 SSHKeyPath:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/running-upgrade-952000/id_rsa Username:docker}
	I0805 10:38:21.675733    9085 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 10:38:21.682924    9085 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0805 10:38:21.689725    9085 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0805 10:38:21.696361    9085 provision.go:87] duration metric: took 137.02725ms to configureAuth
	I0805 10:38:21.696370    9085 buildroot.go:189] setting minikube options for container-runtime
	I0805 10:38:21.696495    9085 config.go:182] Loaded profile config "running-upgrade-952000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0805 10:38:21.696536    9085 main.go:141] libmachine: Using SSH client type: native
	I0805 10:38:21.696621    9085 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10270ea10] 0x102711270 <nil>  [] 0s} localhost 51192 <nil> <nil>}
	I0805 10:38:21.696625    9085 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0805 10:38:21.771230    9085 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0805 10:38:21.771249    9085 buildroot.go:70] root file system type: tmpfs
	I0805 10:38:21.771298    9085 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0805 10:38:21.771368    9085 main.go:141] libmachine: Using SSH client type: native
	I0805 10:38:21.771501    9085 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10270ea10] 0x102711270 <nil>  [] 0s} localhost 51192 <nil> <nil>}
	I0805 10:38:21.771535    9085 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0805 10:38:21.859897    9085 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0805 10:38:21.859958    9085 main.go:141] libmachine: Using SSH client type: native
	I0805 10:38:21.860082    9085 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10270ea10] 0x102711270 <nil>  [] 0s} localhost 51192 <nil> <nil>}
	I0805 10:38:21.860091    9085 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0805 10:38:21.934998    9085 main.go:141] libmachine: SSH cmd err, output: <nil>: 
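
Two idioms are at work in the unit update above. The empty ExecStart= line clears any ExecStart inherited from an earlier definition, since systemd rejects multiple ExecStart= values for anything but Type=oneshot services; and the diff-or-swap wrapper makes the write idempotent, so docker is only reloaded and restarted when the rendered unit actually changed. A generic sketch of the swap (paths as in the log):

    UNIT=/lib/systemd/system/docker.service
    if ! sudo diff -u "$UNIT" "$UNIT.new"; then
      sudo mv "$UNIT.new" "$UNIT"
      sudo systemctl daemon-reload
      sudo systemctl -f enable docker && sudo systemctl -f restart docker
    fi
    grep -c '^ExecStart=/' "$UNIT"   # sanity check: expect exactly one non-empty ExecStart
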
	I0805 10:38:21.935011    9085 machine.go:97] duration metric: took 594.621417ms to provisionDockerMachine
	I0805 10:38:21.935017    9085 start.go:293] postStartSetup for "running-upgrade-952000" (driver="qemu2")
	I0805 10:38:21.935023    9085 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 10:38:21.935076    9085 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 10:38:21.935085    9085 sshutil.go:53] new ssh client: &{IP:localhost Port:51192 SSHKeyPath:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/running-upgrade-952000/id_rsa Username:docker}
	I0805 10:38:21.972983    9085 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 10:38:21.974599    9085 info.go:137] Remote host: Buildroot 2021.02.12
	I0805 10:38:21.974606    9085 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19374-6507/.minikube/addons for local assets ...
	I0805 10:38:21.974691    9085 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19374-6507/.minikube/files for local assets ...
	I0805 10:38:21.974804    9085 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19374-6507/.minikube/files/etc/ssl/certs/70072.pem -> 70072.pem in /etc/ssl/certs
	I0805 10:38:21.974931    9085 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 10:38:21.977674    9085 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19374-6507/.minikube/files/etc/ssl/certs/70072.pem --> /etc/ssl/certs/70072.pem (1708 bytes)
	I0805 10:38:21.984818    9085 start.go:296] duration metric: took 49.797ms for postStartSetup
	I0805 10:38:21.984833    9085 fix.go:56] duration metric: took 657.205666ms for fixHost
	I0805 10:38:21.984871    9085 main.go:141] libmachine: Using SSH client type: native
	I0805 10:38:21.984984    9085 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10270ea10] 0x102711270 <nil>  [] 0s} localhost 51192 <nil> <nil>}
	I0805 10:38:21.984989    9085 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0805 10:38:22.057097    9085 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722879502.514806643
	
	I0805 10:38:22.057111    9085 fix.go:216] guest clock: 1722879502.514806643
	I0805 10:38:22.057115    9085 fix.go:229] Guest: 2024-08-05 10:38:22.514806643 -0700 PDT Remote: 2024-08-05 10:38:21.984835 -0700 PDT m=+9.669283709 (delta=529.971643ms)
	I0805 10:38:22.057127    9085 fix.go:200] guest clock delta is within tolerance: 529.971643ms
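
The clock check reads the guest's `date +%s.%N` over SSH and compares it against the host's wall clock; here the ~530 ms skew is inside minikube's tolerance, so no resync is forced. A rough way to reproduce the measurement (the SSH target is a placeholder; %N needs GNU date, so on a macOS host use coreutils' gdate):

    guest=$(ssh -p 51192 docker@localhost 'date +%s.%N')   # placeholder target
    host=$(gdate +%s.%N)                                   # GNU date assumed
    awk -v g="$guest" -v h="$host" 'BEGIN { printf "delta: %+.3fs\n", g - h }'
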
	I0805 10:38:22.057130    9085 start.go:83] releasing machines lock for "running-upgrade-952000", held for 729.516875ms
	I0805 10:38:22.057191    9085 ssh_runner.go:195] Run: cat /version.json
	I0805 10:38:22.057196    9085 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 10:38:22.057201    9085 sshutil.go:53] new ssh client: &{IP:localhost Port:51192 SSHKeyPath:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/running-upgrade-952000/id_rsa Username:docker}
	I0805 10:38:22.057212    9085 sshutil.go:53] new ssh client: &{IP:localhost Port:51192 SSHKeyPath:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/running-upgrade-952000/id_rsa Username:docker}
	W0805 10:38:22.057886    9085 sshutil.go:64] dial failure (will retry): dial tcp [::1]:51192: connect: connection refused
	I0805 10:38:22.057909    9085 retry.go:31] will retry after 262.220624ms: dial tcp [::1]:51192: connect: connection refused
	W0805 10:38:22.361101    9085 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0805 10:38:22.361168    9085 ssh_runner.go:195] Run: systemctl --version
	I0805 10:38:22.363271    9085 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0805 10:38:22.365099    9085 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 10:38:22.365126    9085 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0805 10:38:22.368213    9085 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0805 10:38:22.373115    9085 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
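
The two find/sed pipelines above walk /etc/cni/net.d, drop IPv6 dst/subnet entries, and pin every bridge- and podman-style config to the 10.244.0.0/16 pod subnet that kubeadm is handed later in this run. Reduced to the single file the result line names, the intent is roughly (simplified; the real command also skips *.mk_disabled files and rewrites the gateway):

    sudo sed -i -E 's|"subnet": "[^"]*"|"subnet": "10.244.0.0/16"|g' \
      /etc/cni/net.d/87-podman-bridge.conflist
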
	I0805 10:38:22.373124    9085 start.go:495] detecting cgroup driver to use...
	I0805 10:38:22.373192    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 10:38:22.378245    9085 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0805 10:38:22.381111    9085 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0805 10:38:22.384303    9085 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0805 10:38:22.384329    9085 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0805 10:38:22.387400    9085 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 10:38:22.390557    9085 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0805 10:38:22.395239    9085 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 10:38:22.398394    9085 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 10:38:22.401225    9085 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0805 10:38:22.404305    9085 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0805 10:38:22.407796    9085 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0805 10:38:22.411098    9085 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 10:38:22.413967    9085 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 10:38:22.416648    9085 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 10:38:22.496655    9085 ssh_runner.go:195] Run: sudo systemctl restart containerd
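
The sed series above rewrites /etc/containerd/config.toml to use the cgroupfs driver (SystemdCgroup = false), the runc v2 shim, the registry.k8s.io/pause:3.7 sandbox image, and /etc/cni/net.d as the CNI directory, then restarts containerd. The same cgroupfs choice reappears below for dockerd and in the kubelet config (cgroupDriver: cgroupfs), keeping all three in agreement. A quick check that the edits landed (sketch):

    grep -nE 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml
    # expect: SystemdCgroup = false, sandbox_image = "registry.k8s.io/pause:3.7",
    #         conf_dir = "/etc/cni/net.d"
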
	I0805 10:38:22.502584    9085 start.go:495] detecting cgroup driver to use...
	I0805 10:38:22.502653    9085 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0805 10:38:22.511006    9085 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 10:38:22.517728    9085 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 10:38:22.526200    9085 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 10:38:22.530576    9085 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0805 10:38:22.534788    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 10:38:22.540412    9085 ssh_runner.go:195] Run: which cri-dockerd
	I0805 10:38:22.541606    9085 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0805 10:38:22.544715    9085 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0805 10:38:22.549428    9085 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0805 10:38:22.628192    9085 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0805 10:38:22.706284    9085 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0805 10:38:22.706340    9085 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0805 10:38:22.711611    9085 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 10:38:22.787528    9085 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0805 10:38:35.459269    9085 ssh_runner.go:235] Completed: sudo systemctl restart docker: (12.67189075s)
	I0805 10:38:35.459350    9085 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0805 10:38:35.464208    9085 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0805 10:38:35.472437    9085 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0805 10:38:35.478070    9085 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0805 10:38:35.571486    9085 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0805 10:38:35.636233    9085 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 10:38:35.701326    9085 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0805 10:38:35.707587    9085 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0805 10:38:35.712354    9085 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 10:38:35.776486    9085 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0805 10:38:35.817691    9085 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0805 10:38:35.817758    9085 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0805 10:38:35.819954    9085 start.go:563] Will wait 60s for crictl version
	I0805 10:38:35.820009    9085 ssh_runner.go:195] Run: which crictl
	I0805 10:38:35.821428    9085 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0805 10:38:35.833460    9085 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0805 10:38:35.833523    9085 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0805 10:38:35.847218    9085 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0805 10:38:35.864984    9085 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0805 10:38:35.865050    9085 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0805 10:38:35.866439    9085 kubeadm.go:883] updating cluster {Name:running-upgrade-952000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51256 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-952000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...

	I0805 10:38:35.866485    9085 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0805 10:38:35.866524    9085 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0805 10:38:35.877333    9085 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0805 10:38:35.877341    9085 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0805 10:38:35.877383    9085 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0805 10:38:35.880512    9085 ssh_runner.go:195] Run: which lz4
	I0805 10:38:35.881945    9085 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0805 10:38:35.883272    9085 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0805 10:38:35.883283    9085 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0805 10:38:36.841804    9085 docker.go:649] duration metric: took 959.902792ms to copy over tarball
	I0805 10:38:36.841863    9085 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0805 10:38:38.180777    9085 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.338918625s)
	I0805 10:38:38.180810    9085 ssh_runner.go:146] rm: /preloaded.tar.lz4
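
The preload sequence is: stat the target path to see whether the tarball is already on the node, scp the ~360 MB arm64 preload over if it is not, extract it into /var while preserving security.capability xattrs (so binaries keep their file capabilities), then delete the tarball. As a standalone sketch ($VM stands for the SSH target, a placeholder):

    T=preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
    ssh "$VM" 'stat /preloaded.tar.lz4' >/dev/null 2>&1 || \
      scp "$HOME/.minikube/cache/preloaded-tarball/$T" "$VM:/preloaded.tar.lz4"
    ssh "$VM" 'sudo tar --xattrs --xattrs-include security.capability \
        -I lz4 -C /var -xf /preloaded.tar.lz4 && sudo rm /preloaded.tar.lz4'
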
	I0805 10:38:38.196691    9085 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0805 10:38:38.200049    9085 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0805 10:38:38.205000    9085 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 10:38:38.268995    9085 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0805 10:38:39.486144    9085 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.217149s)
	I0805 10:38:39.486254    9085 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0805 10:38:39.499952    9085 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0805 10:38:39.499969    9085 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0805 10:38:39.499973    9085 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
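
Why the preload does not satisfy the check: the tarball ships images tagged under the old k8s.gcr.io registry, while this minikube expects registry.k8s.io names, so every image appears missing, gets removed, and is re-loaded from the per-image cache; the kube-* cache files turn out to be absent, which produces the warning further down. If bridging the rename by hand were the goal, a hedged illustration (not what minikube does here) would be:

    for img in kube-apiserver kube-proxy kube-controller-manager kube-scheduler; do
      docker tag "k8s.gcr.io/$img:v1.24.1" "registry.k8s.io/$img:v1.24.1"
    done
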
	I0805 10:38:39.503975    9085 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 10:38:39.505389    9085 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0805 10:38:39.507495    9085 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0805 10:38:39.507944    9085 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 10:38:39.510404    9085 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0805 10:38:39.510420    9085 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0805 10:38:39.511695    9085 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0805 10:38:39.512088    9085 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0805 10:38:39.513301    9085 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0805 10:38:39.513522    9085 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0805 10:38:39.514614    9085 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0805 10:38:39.514632    9085 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0805 10:38:39.515826    9085 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0805 10:38:39.515962    9085 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0805 10:38:39.516981    9085 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0805 10:38:39.517718    9085 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0805 10:38:39.942753    9085 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0805 10:38:39.942753    9085 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0805 10:38:39.947674    9085 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0805 10:38:39.959826    9085 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0805 10:38:39.969438    9085 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0805 10:38:39.974564    9085 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0805 10:38:39.974581    9085 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0805 10:38:39.974594    9085 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0805 10:38:39.974594    9085 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0805 10:38:39.974641    9085 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0805 10:38:39.974641    9085 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0805 10:38:39.975544    9085 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0805 10:38:39.975561    9085 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0805 10:38:39.975590    9085 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	W0805 10:38:39.977063    9085 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0805 10:38:39.977295    9085 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0805 10:38:39.978721    9085 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0805 10:38:39.978738    9085 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0805 10:38:39.978774    9085 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0805 10:38:40.001918    9085 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0805 10:38:40.017878    9085 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0805 10:38:40.017905    9085 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0805 10:38:40.017966    9085 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0805 10:38:40.017968    9085 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0805 10:38:40.031005    9085 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0805 10:38:40.031030    9085 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0805 10:38:40.031092    9085 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0805 10:38:40.031106    9085 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0805 10:38:40.031108    9085 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0805 10:38:40.031149    9085 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0805 10:38:40.038053    9085 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0805 10:38:40.038070    9085 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0805 10:38:40.038072    9085 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0805 10:38:40.038117    9085 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0805 10:38:40.038165    9085 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0805 10:38:40.043506    9085 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0805 10:38:40.043561    9085 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0805 10:38:40.043573    9085 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0805 10:38:40.043607    9085 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0805 10:38:40.058285    9085 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0805 10:38:40.058296    9085 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0805 10:38:40.058310    9085 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0805 10:38:40.058394    9085 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0805 10:38:40.062943    9085 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0805 10:38:40.062976    9085 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0805 10:38:40.108802    9085 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0805 10:38:40.108824    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	W0805 10:38:40.160283    9085 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0805 10:38:40.160382    9085 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 10:38:40.215793    9085 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0805 10:38:40.215814    9085 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0805 10:38:40.215820    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0805 10:38:40.215864    9085 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0805 10:38:40.215879    9085 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 10:38:40.215941    9085 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 10:38:40.339918    9085 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0805 10:38:40.340001    9085 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0805 10:38:40.340103    9085 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0805 10:38:40.343369    9085 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0805 10:38:40.343388    9085 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0805 10:38:40.421495    9085 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0805 10:38:40.421511    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0805 10:38:40.922761    9085 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0805 10:38:40.922785    9085 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0805 10:38:40.922792    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0805 10:38:41.290006    9085 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0805 10:38:41.290057    9085 cache_images.go:92] duration metric: took 1.790100583s to LoadCachedImages
	W0805 10:38:41.290103    9085 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
	I0805 10:38:41.290111    9085 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0805 10:38:41.290164    9085 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-952000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-952000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0805 10:38:41.290250    9085 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0805 10:38:41.358858    9085 cni.go:84] Creating CNI manager for ""
	I0805 10:38:41.358875    9085 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 10:38:41.358880    9085 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0805 10:38:41.358889    9085 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-952000 NodeName:running-upgrade-952000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0805 10:38:41.358959    9085 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-952000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
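The rendered kubeadm config above is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. It can be smoke-tested without touching the node's state using kubeadm's dry-run mode (a sketch; the binary path matches the one probed later in this log):

    sudo /var/lib/minikube/binaries/v1.24.1/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
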
	I0805 10:38:41.359023    9085 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0805 10:38:41.362113    9085 binaries.go:44] Found k8s binaries, skipping transfer
	I0805 10:38:41.362144    9085 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0805 10:38:41.369503    9085 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0805 10:38:41.378689    9085 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0805 10:38:41.389111    9085 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0805 10:38:41.403263    9085 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0805 10:38:41.405544    9085 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 10:38:41.527140    9085 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 10:38:41.536476    9085 certs.go:68] Setting up /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/running-upgrade-952000 for IP: 10.0.2.15
	I0805 10:38:41.536489    9085 certs.go:194] generating shared ca certs ...
	I0805 10:38:41.536502    9085 certs.go:226] acquiring lock for ca certs: {Name:mkd94903be2cadc29e0a5fb0c61367bd1b12d51b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 10:38:41.536663    9085 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19374-6507/.minikube/ca.key
	I0805 10:38:41.536699    9085 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19374-6507/.minikube/proxy-client-ca.key
	I0805 10:38:41.536704    9085 certs.go:256] generating profile certs ...
	I0805 10:38:41.536792    9085 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/running-upgrade-952000/client.key
	I0805 10:38:41.536811    9085 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/running-upgrade-952000/apiserver.key.40e1dc2a
	I0805 10:38:41.536823    9085 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/running-upgrade-952000/apiserver.crt.40e1dc2a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0805 10:38:41.663164    9085 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/running-upgrade-952000/apiserver.crt.40e1dc2a ...
	I0805 10:38:41.663179    9085 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/running-upgrade-952000/apiserver.crt.40e1dc2a: {Name:mkc97aa80e1eca14446267d385a711ca3d848970 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 10:38:41.663417    9085 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/running-upgrade-952000/apiserver.key.40e1dc2a ...
	I0805 10:38:41.663422    9085 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/running-upgrade-952000/apiserver.key.40e1dc2a: {Name:mkbf77a8c5e9db24027092d24de75eec96aed14a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 10:38:41.663550    9085 certs.go:381] copying /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/running-upgrade-952000/apiserver.crt.40e1dc2a -> /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/running-upgrade-952000/apiserver.crt
	I0805 10:38:41.663682    9085 certs.go:385] copying /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/running-upgrade-952000/apiserver.key.40e1dc2a -> /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/running-upgrade-952000/apiserver.key
	I0805 10:38:41.663845    9085 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/running-upgrade-952000/proxy-client.key
	I0805 10:38:41.663988    9085 certs.go:484] found cert: /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/7007.pem (1338 bytes)
	W0805 10:38:41.664024    9085 certs.go:480] ignoring /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/7007_empty.pem, impossibly tiny 0 bytes
	I0805 10:38:41.664031    9085 certs.go:484] found cert: /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/ca-key.pem (1675 bytes)
	I0805 10:38:41.664059    9085 certs.go:484] found cert: /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/ca.pem (1082 bytes)
	I0805 10:38:41.664078    9085 certs.go:484] found cert: /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/cert.pem (1123 bytes)
	I0805 10:38:41.664095    9085 certs.go:484] found cert: /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/key.pem (1679 bytes)
	I0805 10:38:41.664137    9085 certs.go:484] found cert: /Users/jenkins/minikube-integration/19374-6507/.minikube/files/etc/ssl/certs/70072.pem (1708 bytes)
	I0805 10:38:41.664498    9085 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19374-6507/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0805 10:38:41.675442    9085 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19374-6507/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0805 10:38:41.682228    9085 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19374-6507/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0805 10:38:41.689674    9085 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19374-6507/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0805 10:38:41.697756    9085 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/running-upgrade-952000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0805 10:38:41.705512    9085 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/running-upgrade-952000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0805 10:38:41.713467    9085 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/running-upgrade-952000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0805 10:38:41.721975    9085 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/running-upgrade-952000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0805 10:38:41.747102    9085 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19374-6507/.minikube/files/etc/ssl/certs/70072.pem --> /usr/share/ca-certificates/70072.pem (1708 bytes)
	I0805 10:38:41.754260    9085 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19374-6507/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0805 10:38:41.761332    9085 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/7007.pem --> /usr/share/ca-certificates/7007.pem (1338 bytes)
	I0805 10:38:41.768229    9085 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0805 10:38:41.773264    9085 ssh_runner.go:195] Run: openssl version
	I0805 10:38:41.775064    9085 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0805 10:38:41.778514    9085 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0805 10:38:41.779956    9085 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 17:37 /usr/share/ca-certificates/minikubeCA.pem
	I0805 10:38:41.779978    9085 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0805 10:38:41.781926    9085 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0805 10:38:41.784596    9085 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7007.pem && ln -fs /usr/share/ca-certificates/7007.pem /etc/ssl/certs/7007.pem"
	I0805 10:38:41.787865    9085 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7007.pem
	I0805 10:38:41.789381    9085 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  5 17:26 /usr/share/ca-certificates/7007.pem
	I0805 10:38:41.789400    9085 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7007.pem
	I0805 10:38:41.791058    9085 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7007.pem /etc/ssl/certs/51391683.0"
	I0805 10:38:41.793831    9085 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/70072.pem && ln -fs /usr/share/ca-certificates/70072.pem /etc/ssl/certs/70072.pem"
	I0805 10:38:41.796859    9085 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/70072.pem
	I0805 10:38:41.798284    9085 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  5 17:26 /usr/share/ca-certificates/70072.pem
	I0805 10:38:41.798304    9085 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/70072.pem
	I0805 10:38:41.800175    9085 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/70072.pem /etc/ssl/certs/3ec20f2e.0"
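
The ln/openssl pairs above implement OpenSSL's CApath convention: each trusted certificate must be reachable as <subject-hash>.0, where the hash is what `openssl x509 -hash` prints. Generic form:

    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")
    sudo ln -fs "$CERT" "/etc/ssl/certs/$HASH.0"
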
	I0805 10:38:41.803360    9085 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 10:38:41.804823    9085 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0805 10:38:41.806600    9085 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0805 10:38:41.808242    9085 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0805 10:38:41.809992    9085 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0805 10:38:41.812212    9085 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0805 10:38:41.814045    9085 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
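
Each `-checkend 86400` run asks whether the certificate expires within the next 24 hours; openssl exits non-zero if it will. Since none of the six checks derails the restart path here, the existing control-plane certs are treated as reusable. The same check as a loop (sketch):

    for c in apiserver-kubelet-client apiserver-etcd-client front-proxy-client; do
      openssl x509 -noout -checkend 86400 \
        -in "/var/lib/minikube/certs/$c.crt" || echo "$c expires within 24h"
    done
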
	I0805 10:38:41.815692    9085 kubeadm.go:392] StartCluster: {Name:running-upgrade-952000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51256 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-952000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0805 10:38:41.815761    9085 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0805 10:38:41.825934    9085 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0805 10:38:41.829099    9085 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0805 10:38:41.829106    9085 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0805 10:38:41.829132    9085 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0805 10:38:41.832407    9085 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0805 10:38:41.832701    9085 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-952000" does not appear in /Users/jenkins/minikube-integration/19374-6507/kubeconfig
	I0805 10:38:41.832801    9085 kubeconfig.go:62] /Users/jenkins/minikube-integration/19374-6507/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-952000" cluster setting kubeconfig missing "running-upgrade-952000" context setting]
	I0805 10:38:41.833003    9085 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19374-6507/kubeconfig: {Name:mkf52f0a49b2ae63f3d2905c5633513b3086a0af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 10:38:41.833408    9085 kapi.go:59] client config for running-upgrade-952000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/running-upgrade-952000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/running-upgrade-952000/client.key", CAFile:"/Users/jenkins/minikube-integration/19374-6507/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103aa42e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
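The kubeconfig.go:47/62 lines above show why the repair runs: the verify step found no "running-upgrade-952000" cluster or context entry in the jenkins kubeconfig, so minikube rewrites the file under a lock. A sketch of the same repair using client-go's clientcmd package (minikube vendors client-go, but this is an illustrative sketch, not its kubeconfig code):

    // kubeconfig_repair.go - re-add a missing cluster/context entry, roughly
    // what the "needs updating (will repair)" step above does. A sketch only.
    package main

    import (
    	"k8s.io/client-go/tools/clientcmd"
    	api "k8s.io/client-go/tools/clientcmd/api"
    )

    func repair(path, name, server, caFile string) error {
    	cfg, err := clientcmd.LoadFromFile(path)
    	if err != nil {
    		return err
    	}
    	// Add the cluster and context entries the verify step found missing.
    	cfg.Clusters[name] = &api.Cluster{Server: server, CertificateAuthority: caFile}
    	cfg.Contexts[name] = &api.Context{Cluster: name, AuthInfo: name}
    	cfg.CurrentContext = name
    	return clientcmd.WriteToFile(*cfg, path)
    }

    func main() {
    	if err := repair(
    		"/Users/jenkins/minikube-integration/19374-6507/kubeconfig",
    		"running-upgrade-952000",
    		"https://10.0.2.15:8443",
    		"/Users/jenkins/minikube-integration/19374-6507/.minikube/ca.crt",
    	); err != nil {
    		panic(err)
    	}
    }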
	I0805 10:38:41.833717    9085 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0805 10:38:41.836542    9085 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-952000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
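Drift detection here hinges entirely on the exit status of `sudo diff -u`: 0 means the deployed kubeadm.yaml matches the freshly generated one, 1 means they differ and the cluster gets reconfigured. In this run the drift is the criSocket gaining the unix:// URI scheme and the kubelet cgroup driver moving from systemd to cgroupfs. A sketch of that check, using the exact paths from the log:

    // drift_check.go - treat `diff -u old new` exit status 1 as config drift,
    // mirroring the kubeadm.go:640 step above. A sketch, not minikube's code.
    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    )

    func main() {
    	cmd := exec.Command("sudo", "diff", "-u",
    		"/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
    	out, err := cmd.CombinedOutput()
    	var ee *exec.ExitError
    	switch {
    	case err == nil:
    		fmt.Println("no drift") // files identical, nothing to do
    	case errors.As(err, &ee) && ee.ExitCode() == 1:
    		fmt.Printf("drift detected, reconfiguring:\n%s", out) // out is the unified diff
    	default:
    		panic(err) // diff status 2: unreadable file or other trouble
    	}
    }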
	I0805 10:38:41.836547    9085 kubeadm.go:1160] stopping kube-system containers ...
	I0805 10:38:41.836585    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0805 10:38:41.848399    9085 docker.go:483] Stopping containers: [13057f94c0f8 0d601c57878c 2c9aa7466dbd 7c8977dcd66d ff96720b9db2 9da153cbe1d1 2fff138ae5b4 9407b8a7dc24 ab057fb8fb35 17eaa61951a4 0a313728fc22 247cfaee0b9f a3cb59bff14b dce841a2a196 32999ae77620 d71cd5277bf8 7236f9259973 ff2c23088238 380ce6aa9a95 f532748d5913]
	I0805 10:38:41.848470    9085 ssh_runner.go:195] Run: docker stop 13057f94c0f8 0d601c57878c 2c9aa7466dbd 7c8977dcd66d ff96720b9db2 9da153cbe1d1 2fff138ae5b4 9407b8a7dc24 ab057fb8fb35 17eaa61951a4 0a313728fc22 247cfaee0b9f a3cb59bff14b dce841a2a196 32999ae77620 d71cd5277bf8 7236f9259973 ff2c23088238 380ce6aa9a95 f532748d5913
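The two commands above implement the docker.go:483 stop: list container IDs whose Docker names match the kubelet's k8s_<container>_<pod>_<namespace>_ convention filtered to kube-system, then stop them all in one invocation. A sketch of the same two-step sweep:

    // stop_kube_system.go - find kube-system containers by name convention and
    // stop them, as in the ssh_runner steps above. A sketch only.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Same filter the log uses: k8s_<anything>_(kube-system)_ in the name.
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter=name=k8s_.*_(kube-system)_", "--format={{.ID}}").Output()
    	if err != nil {
    		panic(err)
    	}
    	ids := strings.Fields(string(out))
    	if len(ids) == 0 {
    		return
    	}
    	fmt.Println("Stopping containers:", ids)
    	if err := exec.Command("docker", append([]string{"stop"}, ids...)...).Run(); err != nil {
    		panic(err)
    	}
    }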
	I0805 10:38:41.977051    9085 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0805 10:38:42.048629    9085 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 10:38:42.055107    9085 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5639 Aug  5 17:38 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Aug  5 17:38 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Aug  5 17:38 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Aug  5 17:38 /etc/kubernetes/scheduler.conf
	
	I0805 10:38:42.055160    9085 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51256 /etc/kubernetes/admin.conf
	I0805 10:38:42.061518    9085 kubeadm.go:163] "https://control-plane.minikube.internal:51256" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51256 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0805 10:38:42.061555    9085 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 10:38:42.064586    9085 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51256 /etc/kubernetes/kubelet.conf
	I0805 10:38:42.067636    9085 kubeadm.go:163] "https://control-plane.minikube.internal:51256" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51256 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0805 10:38:42.067664    9085 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 10:38:42.073478    9085 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51256 /etc/kubernetes/controller-manager.conf
	I0805 10:38:42.077980    9085 kubeadm.go:163] "https://control-plane.minikube.internal:51256" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51256 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0805 10:38:42.078007    9085 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 10:38:42.081318    9085 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51256 /etc/kubernetes/scheduler.conf
	I0805 10:38:42.084536    9085 kubeadm.go:163] "https://control-plane.minikube.internal:51256" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51256 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0805 10:38:42.084568    9085 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
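The four grep/rm pairs above apply one rule per file: if a kubeconfig under /etc/kubernetes does not mention the expected control-plane endpoint (grep exits 1 when the pattern is absent), delete it so the kubeadm phases below regenerate it. A sketch of that loop:

    // prune_stale_confs.go - remove /etc/kubernetes/*.conf files that don't
    // reference the expected endpoint, mirroring kubeadm.go:163 above. A sketch.
    package main

    import (
    	"errors"
    	"os/exec"
    )

    func main() {
    	endpoint := "https://control-plane.minikube.internal:51256"
    	for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
    		path := "/etc/kubernetes/" + f
    		err := exec.Command("sudo", "grep", endpoint, path).Run()
    		var ee *exec.ExitError
    		if errors.As(err, &ee) && ee.ExitCode() == 1 {
    			// Pattern absent: stale conf, remove so kubeadm can rewrite it.
    			_ = exec.Command("sudo", "rm", "-f", path).Run()
    		}
    	}
    }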
	I0805 10:38:42.090023    9085 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 10:38:42.096201    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 10:38:42.139244    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 10:38:42.666680    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0805 10:38:42.889162    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 10:38:42.917386    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
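Rather than a full `kubeadm init`, the restart replays individual init phases against the regenerated /var/tmp/minikube/kubeadm.yaml, with PATH pointed at the cached v1.24.1 binaries: certs, kubeconfig, kubelet-start, control-plane, then etcd. A sketch of that sequence (the command string matches the log; the loop itself is illustrative):

    // kubeadm_phases.go - replay the init phases in the order shown above.
    // A sketch of the sequence, not minikube's bootstrapper code.
    package main

    import "os/exec"

    func main() {
    	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
    	for _, p := range phases {
    		cmd := "sudo env PATH=\"/var/lib/minikube/binaries/v1.24.1:$PATH\" kubeadm init phase " +
    			p + " --config /var/tmp/minikube/kubeadm.yaml"
    		if err := exec.Command("/bin/bash", "-c", cmd).Run(); err != nil {
    			panic(err) // a failed phase aborts the restart
    		}
    	}
    }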
	I0805 10:38:42.944325    9085 api_server.go:52] waiting for apiserver process to appear ...
	I0805 10:38:42.944398    9085 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 10:38:43.446464    9085 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 10:38:43.946098    9085 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 10:38:44.446452    9085 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 10:38:44.454822    9085 api_server.go:72] duration metric: took 1.510517333s to wait for apiserver process to appear ...
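The pgrep checks above land roughly 500ms apart (42.944, 43.446, 43.946, 44.446) and stop once pgrep exits 0, which is why the duration metric reports ~1.51s. A sketch of that poll (a plain sleep loop standing in for whatever retry helper minikube actually uses):

    // wait_apiserver_proc.go - poll pgrep every 500ms until a kube-apiserver
    // process appears, as in api_server.go:52-72 above. A sketch only.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	start := time.Now()
    	for {
    		// Exit status 0 means pgrep found a matching process.
    		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
    			break
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Printf("apiserver process appeared after %s\n", time.Since(start))
    }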
	I0805 10:38:44.454834    9085 api_server.go:88] waiting for apiserver healthz status ...
	I0805 10:38:44.454843    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:38:49.456848    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:38:49.456869    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:38:54.457039    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:38:54.457094    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:38:59.457509    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:38:59.457530    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:39:04.457908    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:39:04.458017    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:39:09.459155    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:39:09.459199    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:39:14.460133    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:39:14.460189    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:39:19.461488    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:39:19.461528    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:39:24.463021    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:39:24.463074    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:39:29.465112    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:39:29.465159    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:39:34.466032    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:39:34.466084    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:39:39.468363    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:39:39.468385    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:39:44.470567    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
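Every healthz probe above fails after almost exactly 5 seconds, which matches a client-side request timeout rather than a server response: the apiserver process exists but never serves /healthz, and each timeout triggers the diagnostic log sweep that follows. A sketch of the probe (the 5s budget is read off the timestamp gaps; InsecureSkipVerify is a sketch-only simplification, the real client trusts the cluster CA shown in the rest.Config dump earlier):

    // healthz_probe.go - GET /healthz with a 5s budget, as in api_server.go:253.
    // TLS verification is skipped here purely to keep the sketch short.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := client.Get("https://10.0.2.15:8443/healthz")
    	if err != nil {
    		fmt.Println("stopped:", err) // what the log reports on each timeout
    		return
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Println(resp.Status, string(body)) // "200 OK ok" when healthy
    }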
	I0805 10:39:44.470819    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:39:44.485458    9085 logs.go:276] 2 containers: [d4f28dbbc4f1 13057f94c0f8]
	I0805 10:39:44.485551    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:39:44.496211    9085 logs.go:276] 2 containers: [275aaaabca50 d71cd5277bf8]
	I0805 10:39:44.496280    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:39:44.506037    9085 logs.go:276] 1 containers: [d211fad9684e]
	I0805 10:39:44.506101    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:39:44.516583    9085 logs.go:276] 2 containers: [ef0fa267cdc3 7c8977dcd66d]
	I0805 10:39:44.516656    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:39:44.526774    9085 logs.go:276] 1 containers: [9df378c7864e]
	I0805 10:39:44.526844    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:39:44.536887    9085 logs.go:276] 2 containers: [7b6baa84c14b 2c9aa7466dbd]
	I0805 10:39:44.536956    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:39:44.546765    9085 logs.go:276] 0 containers: []
	W0805 10:39:44.546775    9085 logs.go:278] No container was found matching "kindnet"
	I0805 10:39:44.546828    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:39:44.556919    9085 logs.go:276] 0 containers: []
	W0805 10:39:44.556930    9085 logs.go:278] No container was found matching "storage-provisioner"
	I0805 10:39:44.556938    9085 logs.go:123] Gathering logs for kube-apiserver [13057f94c0f8] ...
	I0805 10:39:44.556943    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13057f94c0f8"
	I0805 10:39:44.572860    9085 logs.go:123] Gathering logs for coredns [d211fad9684e] ...
	I0805 10:39:44.572875    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d211fad9684e"
	I0805 10:39:44.584584    9085 logs.go:123] Gathering logs for kube-scheduler [7c8977dcd66d] ...
	I0805 10:39:44.584600    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c8977dcd66d"
	I0805 10:39:44.595660    9085 logs.go:123] Gathering logs for kube-controller-manager [7b6baa84c14b] ...
	I0805 10:39:44.595672    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b6baa84c14b"
	I0805 10:39:44.613183    9085 logs.go:123] Gathering logs for kube-scheduler [ef0fa267cdc3] ...
	I0805 10:39:44.613196    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef0fa267cdc3"
	I0805 10:39:44.624553    9085 logs.go:123] Gathering logs for kube-proxy [9df378c7864e] ...
	I0805 10:39:44.624562    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9df378c7864e"
	I0805 10:39:44.635805    9085 logs.go:123] Gathering logs for kube-controller-manager [2c9aa7466dbd] ...
	I0805 10:39:44.635815    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c9aa7466dbd"
	I0805 10:39:44.648288    9085 logs.go:123] Gathering logs for Docker ...
	I0805 10:39:44.648298    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:39:44.673281    9085 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:39:44.673288    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:39:44.771964    9085 logs.go:123] Gathering logs for kube-apiserver [d4f28dbbc4f1] ...
	I0805 10:39:44.771981    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f28dbbc4f1"
	I0805 10:39:44.786706    9085 logs.go:123] Gathering logs for etcd [275aaaabca50] ...
	I0805 10:39:44.786716    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 275aaaabca50"
	I0805 10:39:44.801140    9085 logs.go:123] Gathering logs for etcd [d71cd5277bf8] ...
	I0805 10:39:44.801149    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71cd5277bf8"
	I0805 10:39:44.818222    9085 logs.go:123] Gathering logs for kubelet ...
	I0805 10:39:44.818236    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:39:44.860081    9085 logs.go:123] Gathering logs for container status ...
	I0805 10:39:44.860091    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:39:44.871515    9085 logs.go:123] Gathering logs for dmesg ...
	I0805 10:39:44.871528    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
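That completes one diagnostic sweep: per component, `docker ps -a --filter=name=k8s_<component>` resolves container IDs (two per control-plane component here, since the old and restarted containers both exist), then `docker logs --tail 400` is collected per ID, alongside journalctl for kubelet and Docker, dmesg, crictl/docker ps, and `kubectl describe nodes`. The cycles below repeat this sweep after every failed healthz probe. A sketch of the fan-out (component list taken from the log; the loop is illustrative, not minikube's logs.go):

    // gather_logs.go - fan out `docker logs --tail 400` across the components
    // queried above. A sketch of the sweep only.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	for _, comp := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"} {
    		out, err := exec.Command("docker", "ps", "-a",
    			"--filter=name=k8s_"+comp, "--format={{.ID}}").Output()
    		if err != nil {
    			panic(err)
    		}
    		ids := strings.Fields(string(out))
    		if len(ids) == 0 {
    			fmt.Printf("W: no container was found matching %q\n", comp)
    			continue
    		}
    		for _, id := range ids {
    			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
    			fmt.Printf("== %s [%s] ==\n%s", comp, id, logs)
    		}
    	}
    }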
	I0805 10:39:47.378002    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:39:52.379743    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:39:52.379921    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:39:52.400662    9085 logs.go:276] 2 containers: [d4f28dbbc4f1 13057f94c0f8]
	I0805 10:39:52.400740    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:39:52.417344    9085 logs.go:276] 2 containers: [275aaaabca50 d71cd5277bf8]
	I0805 10:39:52.417408    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:39:52.428661    9085 logs.go:276] 1 containers: [d211fad9684e]
	I0805 10:39:52.428731    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:39:52.438942    9085 logs.go:276] 2 containers: [ef0fa267cdc3 7c8977dcd66d]
	I0805 10:39:52.439039    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:39:52.448925    9085 logs.go:276] 1 containers: [9df378c7864e]
	I0805 10:39:52.448982    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:39:52.459571    9085 logs.go:276] 2 containers: [7b6baa84c14b 2c9aa7466dbd]
	I0805 10:39:52.459639    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:39:52.470534    9085 logs.go:276] 0 containers: []
	W0805 10:39:52.470547    9085 logs.go:278] No container was found matching "kindnet"
	I0805 10:39:52.470608    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:39:52.481836    9085 logs.go:276] 0 containers: []
	W0805 10:39:52.481847    9085 logs.go:278] No container was found matching "storage-provisioner"
	I0805 10:39:52.481855    9085 logs.go:123] Gathering logs for kube-apiserver [d4f28dbbc4f1] ...
	I0805 10:39:52.481860    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f28dbbc4f1"
	I0805 10:39:52.495655    9085 logs.go:123] Gathering logs for kube-scheduler [ef0fa267cdc3] ...
	I0805 10:39:52.495665    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef0fa267cdc3"
	I0805 10:39:52.508654    9085 logs.go:123] Gathering logs for kube-controller-manager [7b6baa84c14b] ...
	I0805 10:39:52.508667    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b6baa84c14b"
	I0805 10:39:52.526136    9085 logs.go:123] Gathering logs for kube-controller-manager [2c9aa7466dbd] ...
	I0805 10:39:52.526146    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c9aa7466dbd"
	I0805 10:39:52.543887    9085 logs.go:123] Gathering logs for Docker ...
	I0805 10:39:52.543898    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:39:52.570164    9085 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:39:52.570174    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:39:52.606754    9085 logs.go:123] Gathering logs for etcd [275aaaabca50] ...
	I0805 10:39:52.606767    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 275aaaabca50"
	I0805 10:39:52.621347    9085 logs.go:123] Gathering logs for coredns [d211fad9684e] ...
	I0805 10:39:52.621357    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d211fad9684e"
	I0805 10:39:52.634010    9085 logs.go:123] Gathering logs for kube-proxy [9df378c7864e] ...
	I0805 10:39:52.634024    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9df378c7864e"
	I0805 10:39:52.645930    9085 logs.go:123] Gathering logs for kubelet ...
	I0805 10:39:52.645942    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:39:52.686952    9085 logs.go:123] Gathering logs for dmesg ...
	I0805 10:39:52.686961    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:39:52.691024    9085 logs.go:123] Gathering logs for container status ...
	I0805 10:39:52.691033    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:39:52.703343    9085 logs.go:123] Gathering logs for kube-apiserver [13057f94c0f8] ...
	I0805 10:39:52.703358    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13057f94c0f8"
	I0805 10:39:52.715711    9085 logs.go:123] Gathering logs for etcd [d71cd5277bf8] ...
	I0805 10:39:52.715723    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71cd5277bf8"
	I0805 10:39:52.730649    9085 logs.go:123] Gathering logs for kube-scheduler [7c8977dcd66d] ...
	I0805 10:39:52.730659    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c8977dcd66d"
	I0805 10:39:55.243052    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:40:00.245339    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:40:00.245543    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:40:00.263694    9085 logs.go:276] 2 containers: [d4f28dbbc4f1 13057f94c0f8]
	I0805 10:40:00.263784    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:40:00.277375    9085 logs.go:276] 2 containers: [275aaaabca50 d71cd5277bf8]
	I0805 10:40:00.277456    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:40:00.289079    9085 logs.go:276] 1 containers: [d211fad9684e]
	I0805 10:40:00.289152    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:40:00.299590    9085 logs.go:276] 2 containers: [ef0fa267cdc3 7c8977dcd66d]
	I0805 10:40:00.299659    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:40:00.310291    9085 logs.go:276] 1 containers: [9df378c7864e]
	I0805 10:40:00.310359    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:40:00.321683    9085 logs.go:276] 2 containers: [7b6baa84c14b 2c9aa7466dbd]
	I0805 10:40:00.321754    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:40:00.335251    9085 logs.go:276] 0 containers: []
	W0805 10:40:00.335263    9085 logs.go:278] No container was found matching "kindnet"
	I0805 10:40:00.335325    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:40:00.348994    9085 logs.go:276] 0 containers: []
	W0805 10:40:00.349004    9085 logs.go:278] No container was found matching "storage-provisioner"
	I0805 10:40:00.349012    9085 logs.go:123] Gathering logs for etcd [d71cd5277bf8] ...
	I0805 10:40:00.349017    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71cd5277bf8"
	I0805 10:40:00.363157    9085 logs.go:123] Gathering logs for Docker ...
	I0805 10:40:00.363166    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:40:00.389336    9085 logs.go:123] Gathering logs for kube-scheduler [7c8977dcd66d] ...
	I0805 10:40:00.389348    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c8977dcd66d"
	I0805 10:40:00.400324    9085 logs.go:123] Gathering logs for kube-controller-manager [7b6baa84c14b] ...
	I0805 10:40:00.400335    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b6baa84c14b"
	I0805 10:40:00.418041    9085 logs.go:123] Gathering logs for kubelet ...
	I0805 10:40:00.418054    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:40:00.459506    9085 logs.go:123] Gathering logs for dmesg ...
	I0805 10:40:00.459517    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:40:00.463870    9085 logs.go:123] Gathering logs for kube-apiserver [d4f28dbbc4f1] ...
	I0805 10:40:00.463876    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f28dbbc4f1"
	I0805 10:40:00.478284    9085 logs.go:123] Gathering logs for kube-apiserver [13057f94c0f8] ...
	I0805 10:40:00.478300    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13057f94c0f8"
	I0805 10:40:00.494986    9085 logs.go:123] Gathering logs for etcd [275aaaabca50] ...
	I0805 10:40:00.494998    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 275aaaabca50"
	I0805 10:40:00.508978    9085 logs.go:123] Gathering logs for coredns [d211fad9684e] ...
	I0805 10:40:00.508990    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d211fad9684e"
	I0805 10:40:00.520716    9085 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:40:00.520731    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:40:00.556980    9085 logs.go:123] Gathering logs for kube-scheduler [ef0fa267cdc3] ...
	I0805 10:40:00.556994    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef0fa267cdc3"
	I0805 10:40:00.568366    9085 logs.go:123] Gathering logs for kube-proxy [9df378c7864e] ...
	I0805 10:40:00.568382    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9df378c7864e"
	I0805 10:40:00.582234    9085 logs.go:123] Gathering logs for kube-controller-manager [2c9aa7466dbd] ...
	I0805 10:40:00.582245    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c9aa7466dbd"
	I0805 10:40:00.593545    9085 logs.go:123] Gathering logs for container status ...
	I0805 10:40:00.593555    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:40:03.108131    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:40:08.110466    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:40:08.110880    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:40:08.147858    9085 logs.go:276] 2 containers: [d4f28dbbc4f1 13057f94c0f8]
	I0805 10:40:08.147980    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:40:08.168511    9085 logs.go:276] 2 containers: [275aaaabca50 d71cd5277bf8]
	I0805 10:40:08.168612    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:40:08.182428    9085 logs.go:276] 1 containers: [d211fad9684e]
	I0805 10:40:08.182505    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:40:08.196223    9085 logs.go:276] 2 containers: [ef0fa267cdc3 7c8977dcd66d]
	I0805 10:40:08.196295    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:40:08.209363    9085 logs.go:276] 1 containers: [9df378c7864e]
	I0805 10:40:08.209429    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:40:08.219493    9085 logs.go:276] 2 containers: [7b6baa84c14b 2c9aa7466dbd]
	I0805 10:40:08.219578    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:40:08.235748    9085 logs.go:276] 0 containers: []
	W0805 10:40:08.235763    9085 logs.go:278] No container was found matching "kindnet"
	I0805 10:40:08.235822    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:40:08.245814    9085 logs.go:276] 0 containers: []
	W0805 10:40:08.245824    9085 logs.go:278] No container was found matching "storage-provisioner"
	I0805 10:40:08.245834    9085 logs.go:123] Gathering logs for etcd [275aaaabca50] ...
	I0805 10:40:08.245840    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 275aaaabca50"
	I0805 10:40:08.261579    9085 logs.go:123] Gathering logs for kube-scheduler [7c8977dcd66d] ...
	I0805 10:40:08.261589    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c8977dcd66d"
	I0805 10:40:08.272427    9085 logs.go:123] Gathering logs for kube-controller-manager [2c9aa7466dbd] ...
	I0805 10:40:08.272437    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c9aa7466dbd"
	I0805 10:40:08.283622    9085 logs.go:123] Gathering logs for dmesg ...
	I0805 10:40:08.283635    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:40:08.287811    9085 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:40:08.287820    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:40:08.322103    9085 logs.go:123] Gathering logs for kube-apiserver [13057f94c0f8] ...
	I0805 10:40:08.322115    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13057f94c0f8"
	I0805 10:40:08.334324    9085 logs.go:123] Gathering logs for Docker ...
	I0805 10:40:08.334337    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:40:08.360396    9085 logs.go:123] Gathering logs for kubelet ...
	I0805 10:40:08.360404    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:40:08.401398    9085 logs.go:123] Gathering logs for kube-apiserver [d4f28dbbc4f1] ...
	I0805 10:40:08.401405    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f28dbbc4f1"
	I0805 10:40:08.417327    9085 logs.go:123] Gathering logs for etcd [d71cd5277bf8] ...
	I0805 10:40:08.417337    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71cd5277bf8"
	I0805 10:40:08.431763    9085 logs.go:123] Gathering logs for kube-controller-manager [7b6baa84c14b] ...
	I0805 10:40:08.431777    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b6baa84c14b"
	I0805 10:40:08.450713    9085 logs.go:123] Gathering logs for coredns [d211fad9684e] ...
	I0805 10:40:08.450723    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d211fad9684e"
	I0805 10:40:08.462358    9085 logs.go:123] Gathering logs for kube-scheduler [ef0fa267cdc3] ...
	I0805 10:40:08.462369    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef0fa267cdc3"
	I0805 10:40:08.478273    9085 logs.go:123] Gathering logs for kube-proxy [9df378c7864e] ...
	I0805 10:40:08.478286    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9df378c7864e"
	I0805 10:40:08.490258    9085 logs.go:123] Gathering logs for container status ...
	I0805 10:40:08.490271    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:40:11.004834    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:40:16.007043    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:40:16.007281    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:40:16.027012    9085 logs.go:276] 2 containers: [d4f28dbbc4f1 13057f94c0f8]
	I0805 10:40:16.027104    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:40:16.040984    9085 logs.go:276] 2 containers: [275aaaabca50 d71cd5277bf8]
	I0805 10:40:16.041063    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:40:16.052596    9085 logs.go:276] 1 containers: [d211fad9684e]
	I0805 10:40:16.052669    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:40:16.063289    9085 logs.go:276] 2 containers: [ef0fa267cdc3 7c8977dcd66d]
	I0805 10:40:16.063351    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:40:16.073900    9085 logs.go:276] 1 containers: [9df378c7864e]
	I0805 10:40:16.073971    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:40:16.085037    9085 logs.go:276] 2 containers: [7b6baa84c14b 2c9aa7466dbd]
	I0805 10:40:16.085114    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:40:16.096938    9085 logs.go:276] 0 containers: []
	W0805 10:40:16.096949    9085 logs.go:278] No container was found matching "kindnet"
	I0805 10:40:16.097009    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:40:16.107166    9085 logs.go:276] 0 containers: []
	W0805 10:40:16.107177    9085 logs.go:278] No container was found matching "storage-provisioner"
	I0805 10:40:16.107184    9085 logs.go:123] Gathering logs for kube-apiserver [13057f94c0f8] ...
	I0805 10:40:16.107189    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13057f94c0f8"
	I0805 10:40:16.119132    9085 logs.go:123] Gathering logs for dmesg ...
	I0805 10:40:16.119143    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:40:16.123463    9085 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:40:16.123472    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:40:16.157987    9085 logs.go:123] Gathering logs for kube-apiserver [d4f28dbbc4f1] ...
	I0805 10:40:16.157999    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f28dbbc4f1"
	I0805 10:40:16.172465    9085 logs.go:123] Gathering logs for etcd [275aaaabca50] ...
	I0805 10:40:16.172477    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 275aaaabca50"
	I0805 10:40:16.186116    9085 logs.go:123] Gathering logs for kube-controller-manager [2c9aa7466dbd] ...
	I0805 10:40:16.186126    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c9aa7466dbd"
	I0805 10:40:16.197418    9085 logs.go:123] Gathering logs for etcd [d71cd5277bf8] ...
	I0805 10:40:16.197428    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71cd5277bf8"
	I0805 10:40:16.211766    9085 logs.go:123] Gathering logs for kube-scheduler [7c8977dcd66d] ...
	I0805 10:40:16.211776    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c8977dcd66d"
	I0805 10:40:16.222710    9085 logs.go:123] Gathering logs for container status ...
	I0805 10:40:16.222721    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:40:16.234113    9085 logs.go:123] Gathering logs for kubelet ...
	I0805 10:40:16.234124    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:40:16.274154    9085 logs.go:123] Gathering logs for coredns [d211fad9684e] ...
	I0805 10:40:16.274163    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d211fad9684e"
	I0805 10:40:16.285023    9085 logs.go:123] Gathering logs for kube-scheduler [ef0fa267cdc3] ...
	I0805 10:40:16.285035    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef0fa267cdc3"
	I0805 10:40:16.300809    9085 logs.go:123] Gathering logs for kube-proxy [9df378c7864e] ...
	I0805 10:40:16.300818    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9df378c7864e"
	I0805 10:40:16.312245    9085 logs.go:123] Gathering logs for kube-controller-manager [7b6baa84c14b] ...
	I0805 10:40:16.312257    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b6baa84c14b"
	I0805 10:40:16.329149    9085 logs.go:123] Gathering logs for Docker ...
	I0805 10:40:16.329163    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:40:18.856828    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:40:23.857821    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:40:23.858145    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:40:23.892308    9085 logs.go:276] 2 containers: [d4f28dbbc4f1 13057f94c0f8]
	I0805 10:40:23.892454    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:40:23.913557    9085 logs.go:276] 2 containers: [275aaaabca50 d71cd5277bf8]
	I0805 10:40:23.913675    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:40:23.928069    9085 logs.go:276] 1 containers: [d211fad9684e]
	I0805 10:40:23.928147    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:40:23.940731    9085 logs.go:276] 2 containers: [ef0fa267cdc3 7c8977dcd66d]
	I0805 10:40:23.940817    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:40:23.951656    9085 logs.go:276] 1 containers: [9df378c7864e]
	I0805 10:40:23.951728    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:40:23.962753    9085 logs.go:276] 2 containers: [7b6baa84c14b 2c9aa7466dbd]
	I0805 10:40:23.962823    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:40:23.973338    9085 logs.go:276] 0 containers: []
	W0805 10:40:23.973349    9085 logs.go:278] No container was found matching "kindnet"
	I0805 10:40:23.973420    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:40:23.984104    9085 logs.go:276] 0 containers: []
	W0805 10:40:23.984114    9085 logs.go:278] No container was found matching "storage-provisioner"
	I0805 10:40:23.984120    9085 logs.go:123] Gathering logs for kube-apiserver [d4f28dbbc4f1] ...
	I0805 10:40:23.984126    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f28dbbc4f1"
	I0805 10:40:23.998745    9085 logs.go:123] Gathering logs for etcd [275aaaabca50] ...
	I0805 10:40:23.998759    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 275aaaabca50"
	I0805 10:40:24.012448    9085 logs.go:123] Gathering logs for etcd [d71cd5277bf8] ...
	I0805 10:40:24.012461    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71cd5277bf8"
	I0805 10:40:24.026945    9085 logs.go:123] Gathering logs for kube-controller-manager [7b6baa84c14b] ...
	I0805 10:40:24.026957    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b6baa84c14b"
	I0805 10:40:24.044372    9085 logs.go:123] Gathering logs for Docker ...
	I0805 10:40:24.044385    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:40:24.069716    9085 logs.go:123] Gathering logs for container status ...
	I0805 10:40:24.069725    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:40:24.083947    9085 logs.go:123] Gathering logs for dmesg ...
	I0805 10:40:24.083962    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:40:24.088324    9085 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:40:24.088330    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:40:24.125794    9085 logs.go:123] Gathering logs for kube-apiserver [13057f94c0f8] ...
	I0805 10:40:24.125808    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13057f94c0f8"
	I0805 10:40:24.137986    9085 logs.go:123] Gathering logs for kube-scheduler [ef0fa267cdc3] ...
	I0805 10:40:24.137998    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef0fa267cdc3"
	I0805 10:40:24.149514    9085 logs.go:123] Gathering logs for kube-proxy [9df378c7864e] ...
	I0805 10:40:24.149527    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9df378c7864e"
	I0805 10:40:24.161238    9085 logs.go:123] Gathering logs for kubelet ...
	I0805 10:40:24.161252    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:40:24.203501    9085 logs.go:123] Gathering logs for coredns [d211fad9684e] ...
	I0805 10:40:24.203510    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d211fad9684e"
	I0805 10:40:24.215266    9085 logs.go:123] Gathering logs for kube-scheduler [7c8977dcd66d] ...
	I0805 10:40:24.215279    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c8977dcd66d"
	I0805 10:40:24.226794    9085 logs.go:123] Gathering logs for kube-controller-manager [2c9aa7466dbd] ...
	I0805 10:40:24.226807    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c9aa7466dbd"
	I0805 10:40:26.739858    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:40:31.742018    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:40:31.742157    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:40:31.755942    9085 logs.go:276] 2 containers: [d4f28dbbc4f1 13057f94c0f8]
	I0805 10:40:31.756025    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:40:31.767834    9085 logs.go:276] 2 containers: [275aaaabca50 d71cd5277bf8]
	I0805 10:40:31.767901    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:40:31.781056    9085 logs.go:276] 1 containers: [d211fad9684e]
	I0805 10:40:31.781129    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:40:31.792257    9085 logs.go:276] 2 containers: [ef0fa267cdc3 7c8977dcd66d]
	I0805 10:40:31.792325    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:40:31.802197    9085 logs.go:276] 1 containers: [9df378c7864e]
	I0805 10:40:31.802275    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:40:31.812919    9085 logs.go:276] 2 containers: [7b6baa84c14b 2c9aa7466dbd]
	I0805 10:40:31.812991    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:40:31.827283    9085 logs.go:276] 0 containers: []
	W0805 10:40:31.827294    9085 logs.go:278] No container was found matching "kindnet"
	I0805 10:40:31.827353    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:40:31.837331    9085 logs.go:276] 0 containers: []
	W0805 10:40:31.837343    9085 logs.go:278] No container was found matching "storage-provisioner"
	I0805 10:40:31.837351    9085 logs.go:123] Gathering logs for kube-apiserver [d4f28dbbc4f1] ...
	I0805 10:40:31.837357    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f28dbbc4f1"
	I0805 10:40:31.851140    9085 logs.go:123] Gathering logs for Docker ...
	I0805 10:40:31.851152    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:40:31.877406    9085 logs.go:123] Gathering logs for container status ...
	I0805 10:40:31.877418    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:40:31.889609    9085 logs.go:123] Gathering logs for kubelet ...
	I0805 10:40:31.889620    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:40:31.931964    9085 logs.go:123] Gathering logs for kube-scheduler [ef0fa267cdc3] ...
	I0805 10:40:31.931976    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef0fa267cdc3"
	I0805 10:40:31.943731    9085 logs.go:123] Gathering logs for kube-controller-manager [7b6baa84c14b] ...
	I0805 10:40:31.943743    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b6baa84c14b"
	I0805 10:40:31.961153    9085 logs.go:123] Gathering logs for kube-controller-manager [2c9aa7466dbd] ...
	I0805 10:40:31.961163    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c9aa7466dbd"
	I0805 10:40:31.972929    9085 logs.go:123] Gathering logs for dmesg ...
	I0805 10:40:31.972943    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:40:31.978137    9085 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:40:31.978145    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:40:32.013793    9085 logs.go:123] Gathering logs for etcd [275aaaabca50] ...
	I0805 10:40:32.013805    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 275aaaabca50"
	I0805 10:40:32.027730    9085 logs.go:123] Gathering logs for etcd [d71cd5277bf8] ...
	I0805 10:40:32.027741    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71cd5277bf8"
	I0805 10:40:32.051564    9085 logs.go:123] Gathering logs for kube-scheduler [7c8977dcd66d] ...
	I0805 10:40:32.051575    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c8977dcd66d"
	I0805 10:40:32.062704    9085 logs.go:123] Gathering logs for kube-proxy [9df378c7864e] ...
	I0805 10:40:32.062719    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9df378c7864e"
	I0805 10:40:32.074123    9085 logs.go:123] Gathering logs for kube-apiserver [13057f94c0f8] ...
	I0805 10:40:32.074136    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13057f94c0f8"
	I0805 10:40:32.090629    9085 logs.go:123] Gathering logs for coredns [d211fad9684e] ...
	I0805 10:40:32.090640    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d211fad9684e"
	I0805 10:40:34.603726    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:40:39.605899    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:40:39.606192    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:40:39.633119    9085 logs.go:276] 2 containers: [d4f28dbbc4f1 13057f94c0f8]
	I0805 10:40:39.633252    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:40:39.650611    9085 logs.go:276] 2 containers: [275aaaabca50 d71cd5277bf8]
	I0805 10:40:39.650697    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:40:39.664121    9085 logs.go:276] 1 containers: [d211fad9684e]
	I0805 10:40:39.664201    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:40:39.676067    9085 logs.go:276] 2 containers: [ef0fa267cdc3 7c8977dcd66d]
	I0805 10:40:39.676134    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:40:39.686647    9085 logs.go:276] 1 containers: [9df378c7864e]
	I0805 10:40:39.686721    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:40:39.697914    9085 logs.go:276] 2 containers: [7b6baa84c14b 2c9aa7466dbd]
	I0805 10:40:39.697984    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:40:39.708263    9085 logs.go:276] 0 containers: []
	W0805 10:40:39.708276    9085 logs.go:278] No container was found matching "kindnet"
	I0805 10:40:39.708344    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:40:39.718132    9085 logs.go:276] 0 containers: []
	W0805 10:40:39.718144    9085 logs.go:278] No container was found matching "storage-provisioner"
	I0805 10:40:39.718152    9085 logs.go:123] Gathering logs for kube-apiserver [d4f28dbbc4f1] ...
	I0805 10:40:39.718158    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f28dbbc4f1"
	I0805 10:40:39.732482    9085 logs.go:123] Gathering logs for kube-scheduler [7c8977dcd66d] ...
	I0805 10:40:39.732495    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c8977dcd66d"
	I0805 10:40:39.743631    9085 logs.go:123] Gathering logs for container status ...
	I0805 10:40:39.743644    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:40:39.755944    9085 logs.go:123] Gathering logs for kube-proxy [9df378c7864e] ...
	I0805 10:40:39.755954    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9df378c7864e"
	I0805 10:40:39.767733    9085 logs.go:123] Gathering logs for kube-controller-manager [7b6baa84c14b] ...
	I0805 10:40:39.767743    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b6baa84c14b"
	I0805 10:40:39.784884    9085 logs.go:123] Gathering logs for etcd [275aaaabca50] ...
	I0805 10:40:39.784895    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 275aaaabca50"
	I0805 10:40:39.802821    9085 logs.go:123] Gathering logs for etcd [d71cd5277bf8] ...
	I0805 10:40:39.802833    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71cd5277bf8"
	I0805 10:40:39.818505    9085 logs.go:123] Gathering logs for coredns [d211fad9684e] ...
	I0805 10:40:39.818515    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d211fad9684e"
	I0805 10:40:39.830125    9085 logs.go:123] Gathering logs for kube-scheduler [ef0fa267cdc3] ...
	I0805 10:40:39.830136    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef0fa267cdc3"
	I0805 10:40:39.841355    9085 logs.go:123] Gathering logs for Docker ...
	I0805 10:40:39.841365    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:40:39.867789    9085 logs.go:123] Gathering logs for kubelet ...
	I0805 10:40:39.867798    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:40:39.910122    9085 logs.go:123] Gathering logs for dmesg ...
	I0805 10:40:39.910132    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:40:39.915072    9085 logs.go:123] Gathering logs for kube-apiserver [13057f94c0f8] ...
	I0805 10:40:39.915078    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13057f94c0f8"
	I0805 10:40:39.930532    9085 logs.go:123] Gathering logs for kube-controller-manager [2c9aa7466dbd] ...
	I0805 10:40:39.930543    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c9aa7466dbd"
	I0805 10:40:39.942072    9085 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:40:39.942084    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:40:42.479461    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:40:47.481655    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:40:47.481862    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:40:47.504917    9085 logs.go:276] 2 containers: [d4f28dbbc4f1 13057f94c0f8]
	I0805 10:40:47.505058    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:40:47.522047    9085 logs.go:276] 2 containers: [275aaaabca50 d71cd5277bf8]
	I0805 10:40:47.522128    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:40:47.535945    9085 logs.go:276] 1 containers: [d211fad9684e]
	I0805 10:40:47.536008    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:40:47.547048    9085 logs.go:276] 2 containers: [ef0fa267cdc3 7c8977dcd66d]
	I0805 10:40:47.547115    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:40:47.561977    9085 logs.go:276] 1 containers: [9df378c7864e]
	I0805 10:40:47.562046    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:40:47.572335    9085 logs.go:276] 2 containers: [7b6baa84c14b 2c9aa7466dbd]
	I0805 10:40:47.572410    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:40:47.583407    9085 logs.go:276] 0 containers: []
	W0805 10:40:47.583420    9085 logs.go:278] No container was found matching "kindnet"
	I0805 10:40:47.583479    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:40:47.599656    9085 logs.go:276] 0 containers: []
	W0805 10:40:47.599669    9085 logs.go:278] No container was found matching "storage-provisioner"
	I0805 10:40:47.599678    9085 logs.go:123] Gathering logs for kube-proxy [9df378c7864e] ...
	I0805 10:40:47.599683    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9df378c7864e"
	I0805 10:40:47.611816    9085 logs.go:123] Gathering logs for kubelet ...
	I0805 10:40:47.611830    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:40:47.652970    9085 logs.go:123] Gathering logs for dmesg ...
	I0805 10:40:47.652979    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:40:47.657561    9085 logs.go:123] Gathering logs for etcd [275aaaabca50] ...
	I0805 10:40:47.657570    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 275aaaabca50"
	I0805 10:40:47.674671    9085 logs.go:123] Gathering logs for kube-scheduler [ef0fa267cdc3] ...
	I0805 10:40:47.674683    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef0fa267cdc3"
	I0805 10:40:47.689615    9085 logs.go:123] Gathering logs for kube-scheduler [7c8977dcd66d] ...
	I0805 10:40:47.689626    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c8977dcd66d"
	I0805 10:40:47.702212    9085 logs.go:123] Gathering logs for container status ...
	I0805 10:40:47.702225    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:40:47.714738    9085 logs.go:123] Gathering logs for kube-apiserver [d4f28dbbc4f1] ...
	I0805 10:40:47.714751    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f28dbbc4f1"
	I0805 10:40:47.729428    9085 logs.go:123] Gathering logs for etcd [d71cd5277bf8] ...
	I0805 10:40:47.729440    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71cd5277bf8"
	I0805 10:40:47.767206    9085 logs.go:123] Gathering logs for coredns [d211fad9684e] ...
	I0805 10:40:47.767217    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d211fad9684e"
	I0805 10:40:47.778214    9085 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:40:47.778226    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:40:47.817884    9085 logs.go:123] Gathering logs for kube-apiserver [13057f94c0f8] ...
	I0805 10:40:47.817899    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13057f94c0f8"
	I0805 10:40:47.829714    9085 logs.go:123] Gathering logs for kube-controller-manager [7b6baa84c14b] ...
	I0805 10:40:47.829729    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b6baa84c14b"
	I0805 10:40:47.848235    9085 logs.go:123] Gathering logs for kube-controller-manager [2c9aa7466dbd] ...
	I0805 10:40:47.848246    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c9aa7466dbd"
	I0805 10:40:47.859457    9085 logs.go:123] Gathering logs for Docker ...
	I0805 10:40:47.859470    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:40:50.386368    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:40:55.388649    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:40:55.389100    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:40:55.432625    9085 logs.go:276] 2 containers: [d4f28dbbc4f1 13057f94c0f8]
	I0805 10:40:55.432776    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:40:55.453776    9085 logs.go:276] 2 containers: [275aaaabca50 d71cd5277bf8]
	I0805 10:40:55.453888    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:40:55.468955    9085 logs.go:276] 1 containers: [d211fad9684e]
	I0805 10:40:55.469034    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:40:55.488188    9085 logs.go:276] 2 containers: [ef0fa267cdc3 7c8977dcd66d]
	I0805 10:40:55.488255    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:40:55.498725    9085 logs.go:276] 1 containers: [9df378c7864e]
	I0805 10:40:55.498795    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:40:55.509613    9085 logs.go:276] 2 containers: [7b6baa84c14b 2c9aa7466dbd]
	I0805 10:40:55.509678    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:40:55.520183    9085 logs.go:276] 0 containers: []
	W0805 10:40:55.520196    9085 logs.go:278] No container was found matching "kindnet"
	I0805 10:40:55.520252    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:40:55.530510    9085 logs.go:276] 0 containers: []
	W0805 10:40:55.530521    9085 logs.go:278] No container was found matching "storage-provisioner"
	I0805 10:40:55.530529    9085 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:40:55.530536    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:40:55.564262    9085 logs.go:123] Gathering logs for etcd [d71cd5277bf8] ...
	I0805 10:40:55.564275    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71cd5277bf8"
	I0805 10:40:55.580242    9085 logs.go:123] Gathering logs for kube-proxy [9df378c7864e] ...
	I0805 10:40:55.580256    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9df378c7864e"
	I0805 10:40:55.592448    9085 logs.go:123] Gathering logs for Docker ...
	I0805 10:40:55.592463    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:40:55.617644    9085 logs.go:123] Gathering logs for container status ...
	I0805 10:40:55.617653    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:40:55.629546    9085 logs.go:123] Gathering logs for kube-apiserver [13057f94c0f8] ...
	I0805 10:40:55.629557    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13057f94c0f8"
	I0805 10:40:55.642000    9085 logs.go:123] Gathering logs for etcd [275aaaabca50] ...
	I0805 10:40:55.642011    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 275aaaabca50"
	I0805 10:40:55.663172    9085 logs.go:123] Gathering logs for coredns [d211fad9684e] ...
	I0805 10:40:55.663187    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d211fad9684e"
	I0805 10:40:55.675037    9085 logs.go:123] Gathering logs for kube-controller-manager [7b6baa84c14b] ...
	I0805 10:40:55.675052    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b6baa84c14b"
	I0805 10:40:55.692294    9085 logs.go:123] Gathering logs for kube-controller-manager [2c9aa7466dbd] ...
	I0805 10:40:55.692309    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c9aa7466dbd"
	I0805 10:40:55.703624    9085 logs.go:123] Gathering logs for dmesg ...
	I0805 10:40:55.703634    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:40:55.707830    9085 logs.go:123] Gathering logs for kube-scheduler [ef0fa267cdc3] ...
	I0805 10:40:55.707839    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef0fa267cdc3"
	I0805 10:40:55.719884    9085 logs.go:123] Gathering logs for kube-scheduler [7c8977dcd66d] ...
	I0805 10:40:55.719894    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c8977dcd66d"
	I0805 10:40:55.731174    9085 logs.go:123] Gathering logs for kubelet ...
	I0805 10:40:55.731186    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:40:55.772852    9085 logs.go:123] Gathering logs for kube-apiserver [d4f28dbbc4f1] ...
	I0805 10:40:55.772863    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f28dbbc4f1"
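
Note: each diagnostic pass starts by enumerating containers per component with docker ps -a --filter=name=k8s_<component> --format={{.ID}}. Two IDs per component (as for kube-apiserver and etcd above) are consistent with an exited container plus its restarted replacement, which is why both get tailed. A sketch of that discovery step; the docker invocation is copied from the log, the helper name is hypothetical.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs lists all containers (running or exited) whose name matches
    // the k8s_<component> prefix, mirroring the "docker ps -a --filter" calls above.
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
        for _, c := range components {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Printf("%s: %v\n", c, err)
                continue
            }
            if len(ids) == 0 {
                // matches the W-level lines above
                fmt.Printf("No container was found matching %q\n", c)
                continue
            }
            fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
        }
    }
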
	I0805 10:40:58.289182    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:41:03.291357    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:41:03.291608    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:41:03.315125    9085 logs.go:276] 2 containers: [d4f28dbbc4f1 13057f94c0f8]
	I0805 10:41:03.315270    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:41:03.330467    9085 logs.go:276] 2 containers: [275aaaabca50 d71cd5277bf8]
	I0805 10:41:03.330548    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:41:03.347991    9085 logs.go:276] 1 containers: [d211fad9684e]
	I0805 10:41:03.348062    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:41:03.358488    9085 logs.go:276] 2 containers: [ef0fa267cdc3 7c8977dcd66d]
	I0805 10:41:03.358560    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:41:03.372497    9085 logs.go:276] 1 containers: [9df378c7864e]
	I0805 10:41:03.372559    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:41:03.382924    9085 logs.go:276] 2 containers: [7b6baa84c14b 2c9aa7466dbd]
	I0805 10:41:03.382983    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:41:03.392782    9085 logs.go:276] 0 containers: []
	W0805 10:41:03.392793    9085 logs.go:278] No container was found matching "kindnet"
	I0805 10:41:03.392844    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:41:03.403042    9085 logs.go:276] 0 containers: []
	W0805 10:41:03.403055    9085 logs.go:278] No container was found matching "storage-provisioner"
	I0805 10:41:03.403063    9085 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:41:03.403068    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:41:03.438822    9085 logs.go:123] Gathering logs for kube-apiserver [d4f28dbbc4f1] ...
	I0805 10:41:03.438833    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f28dbbc4f1"
	I0805 10:41:03.452874    9085 logs.go:123] Gathering logs for kube-controller-manager [2c9aa7466dbd] ...
	I0805 10:41:03.452884    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c9aa7466dbd"
	I0805 10:41:03.464649    9085 logs.go:123] Gathering logs for container status ...
	I0805 10:41:03.464660    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:41:03.476372    9085 logs.go:123] Gathering logs for etcd [275aaaabca50] ...
	I0805 10:41:03.476384    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 275aaaabca50"
	I0805 10:41:03.490343    9085 logs.go:123] Gathering logs for kube-scheduler [ef0fa267cdc3] ...
	I0805 10:41:03.490353    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef0fa267cdc3"
	I0805 10:41:03.502251    9085 logs.go:123] Gathering logs for kube-controller-manager [7b6baa84c14b] ...
	I0805 10:41:03.502262    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b6baa84c14b"
	I0805 10:41:03.520538    9085 logs.go:123] Gathering logs for Docker ...
	I0805 10:41:03.520548    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:41:03.545759    9085 logs.go:123] Gathering logs for kubelet ...
	I0805 10:41:03.545772    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:41:03.588056    9085 logs.go:123] Gathering logs for dmesg ...
	I0805 10:41:03.588069    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:41:03.592577    9085 logs.go:123] Gathering logs for etcd [d71cd5277bf8] ...
	I0805 10:41:03.592585    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71cd5277bf8"
	I0805 10:41:03.607364    9085 logs.go:123] Gathering logs for kube-scheduler [7c8977dcd66d] ...
	I0805 10:41:03.607375    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c8977dcd66d"
	I0805 10:41:03.618916    9085 logs.go:123] Gathering logs for kube-proxy [9df378c7864e] ...
	I0805 10:41:03.618929    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9df378c7864e"
	I0805 10:41:03.630901    9085 logs.go:123] Gathering logs for kube-apiserver [13057f94c0f8] ...
	I0805 10:41:03.630913    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13057f94c0f8"
	I0805 10:41:03.642770    9085 logs.go:123] Gathering logs for coredns [d211fad9684e] ...
	I0805 10:41:03.642781    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d211fad9684e"
	I0805 10:41:06.156217    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:41:11.157608    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:41:11.158043    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:41:11.194715    9085 logs.go:276] 2 containers: [d4f28dbbc4f1 13057f94c0f8]
	I0805 10:41:11.194836    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:41:11.212928    9085 logs.go:276] 2 containers: [275aaaabca50 d71cd5277bf8]
	I0805 10:41:11.213022    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:41:11.226486    9085 logs.go:276] 1 containers: [d211fad9684e]
	I0805 10:41:11.226562    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:41:11.237614    9085 logs.go:276] 2 containers: [ef0fa267cdc3 7c8977dcd66d]
	I0805 10:41:11.237680    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:41:11.248363    9085 logs.go:276] 1 containers: [9df378c7864e]
	I0805 10:41:11.248430    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:41:11.258747    9085 logs.go:276] 2 containers: [7b6baa84c14b 2c9aa7466dbd]
	I0805 10:41:11.258815    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:41:11.269241    9085 logs.go:276] 0 containers: []
	W0805 10:41:11.269252    9085 logs.go:278] No container was found matching "kindnet"
	I0805 10:41:11.269306    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:41:11.279502    9085 logs.go:276] 0 containers: []
	W0805 10:41:11.279514    9085 logs.go:278] No container was found matching "storage-provisioner"
	I0805 10:41:11.279522    9085 logs.go:123] Gathering logs for kube-apiserver [13057f94c0f8] ...
	I0805 10:41:11.279528    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13057f94c0f8"
	I0805 10:41:11.292296    9085 logs.go:123] Gathering logs for etcd [275aaaabca50] ...
	I0805 10:41:11.292309    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 275aaaabca50"
	I0805 10:41:11.306917    9085 logs.go:123] Gathering logs for kube-scheduler [ef0fa267cdc3] ...
	I0805 10:41:11.306928    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef0fa267cdc3"
	I0805 10:41:11.318785    9085 logs.go:123] Gathering logs for container status ...
	I0805 10:41:11.318796    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:41:11.330437    9085 logs.go:123] Gathering logs for kube-apiserver [d4f28dbbc4f1] ...
	I0805 10:41:11.330448    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f28dbbc4f1"
	I0805 10:41:11.345470    9085 logs.go:123] Gathering logs for etcd [d71cd5277bf8] ...
	I0805 10:41:11.345481    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71cd5277bf8"
	I0805 10:41:11.359927    9085 logs.go:123] Gathering logs for kube-controller-manager [2c9aa7466dbd] ...
	I0805 10:41:11.359938    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c9aa7466dbd"
	I0805 10:41:11.371693    9085 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:41:11.371705    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:41:11.408226    9085 logs.go:123] Gathering logs for coredns [d211fad9684e] ...
	I0805 10:41:11.408239    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d211fad9684e"
	I0805 10:41:11.420204    9085 logs.go:123] Gathering logs for Docker ...
	I0805 10:41:11.420215    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:41:11.444083    9085 logs.go:123] Gathering logs for kubelet ...
	I0805 10:41:11.444091    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:41:11.483698    9085 logs.go:123] Gathering logs for dmesg ...
	I0805 10:41:11.483709    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:41:11.487965    9085 logs.go:123] Gathering logs for kube-scheduler [7c8977dcd66d] ...
	I0805 10:41:11.487973    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c8977dcd66d"
	I0805 10:41:11.499166    9085 logs.go:123] Gathering logs for kube-proxy [9df378c7864e] ...
	I0805 10:41:11.499178    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9df378c7864e"
	I0805 10:41:11.515078    9085 logs.go:123] Gathering logs for kube-controller-manager [7b6baa84c14b] ...
	I0805 10:41:11.515090    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b6baa84c14b"
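
Note: after discovery, every "Gathering logs for X ..." line maps to one shell command run through /bin/bash -c: docker logs --tail 400 <id> for containers, journalctl for the kubelet and docker/cri-docker units, a filtered dmesg, kubectl describe nodes, and a container-status listing that prefers crictl and falls back to docker (that is what the `which crictl || echo crictl` ... `|| sudo docker ps -a` one-liner does). A sketch of that dispatch, using commands copied verbatim from the log; the gather helper is hypothetical, and locally it runs /bin/bash directly where minikube's ssh_runner runs it inside the VM.

    package main

    import (
        "fmt"
        "os/exec"
    )

    // gather runs one log-collection command the way ssh_runner does above:
    // through /bin/bash -c, capturing stdout and stderr together.
    func gather(cmd string) (string, error) {
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        return string(out), err
    }

    func main() {
        sources := map[string]string{
            "kubelet":             "sudo journalctl -u kubelet -n 400",
            "Docker":              "sudo journalctl -u docker -u cri-docker -n 400",
            "dmesg":               "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
            "container status":    "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
            "etcd [275aaaabca50]": "docker logs --tail 400 275aaaabca50",
        }
        for name, cmd := range sources {
            fmt.Printf("Gathering logs for %s ...\n", name)
            if out, err := gather(cmd); err != nil {
                fmt.Printf("error: %v\n", err)
            } else {
                fmt.Print(out)
            }
        }
    }
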
	I0805 10:41:14.033886    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:41:19.036022    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:41:19.036174    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:41:19.063630    9085 logs.go:276] 2 containers: [d4f28dbbc4f1 13057f94c0f8]
	I0805 10:41:19.063719    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:41:19.081869    9085 logs.go:276] 2 containers: [275aaaabca50 d71cd5277bf8]
	I0805 10:41:19.081945    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:41:19.092771    9085 logs.go:276] 1 containers: [d211fad9684e]
	I0805 10:41:19.092839    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:41:19.103308    9085 logs.go:276] 2 containers: [ef0fa267cdc3 7c8977dcd66d]
	I0805 10:41:19.103383    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:41:19.119946    9085 logs.go:276] 1 containers: [9df378c7864e]
	I0805 10:41:19.120025    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:41:19.130959    9085 logs.go:276] 2 containers: [7b6baa84c14b 2c9aa7466dbd]
	I0805 10:41:19.131020    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:41:19.141431    9085 logs.go:276] 0 containers: []
	W0805 10:41:19.141442    9085 logs.go:278] No container was found matching "kindnet"
	I0805 10:41:19.141494    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:41:19.152669    9085 logs.go:276] 0 containers: []
	W0805 10:41:19.152686    9085 logs.go:278] No container was found matching "storage-provisioner"
	I0805 10:41:19.152694    9085 logs.go:123] Gathering logs for kube-scheduler [ef0fa267cdc3] ...
	I0805 10:41:19.152699    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef0fa267cdc3"
	I0805 10:41:19.168348    9085 logs.go:123] Gathering logs for kube-scheduler [7c8977dcd66d] ...
	I0805 10:41:19.168358    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c8977dcd66d"
	I0805 10:41:19.179942    9085 logs.go:123] Gathering logs for kube-proxy [9df378c7864e] ...
	I0805 10:41:19.179954    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9df378c7864e"
	I0805 10:41:19.191113    9085 logs.go:123] Gathering logs for container status ...
	I0805 10:41:19.191125    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:41:19.202690    9085 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:41:19.202700    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:41:19.236521    9085 logs.go:123] Gathering logs for etcd [275aaaabca50] ...
	I0805 10:41:19.236531    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 275aaaabca50"
	I0805 10:41:19.250212    9085 logs.go:123] Gathering logs for dmesg ...
	I0805 10:41:19.250224    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:41:19.254860    9085 logs.go:123] Gathering logs for kube-apiserver [d4f28dbbc4f1] ...
	I0805 10:41:19.254870    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f28dbbc4f1"
	I0805 10:41:19.269233    9085 logs.go:123] Gathering logs for kube-apiserver [13057f94c0f8] ...
	I0805 10:41:19.269246    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13057f94c0f8"
	I0805 10:41:19.284722    9085 logs.go:123] Gathering logs for etcd [d71cd5277bf8] ...
	I0805 10:41:19.284733    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71cd5277bf8"
	I0805 10:41:19.299061    9085 logs.go:123] Gathering logs for kube-controller-manager [2c9aa7466dbd] ...
	I0805 10:41:19.299071    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c9aa7466dbd"
	I0805 10:41:19.310441    9085 logs.go:123] Gathering logs for Docker ...
	I0805 10:41:19.310453    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:41:19.333827    9085 logs.go:123] Gathering logs for kubelet ...
	I0805 10:41:19.333834    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:41:19.374813    9085 logs.go:123] Gathering logs for coredns [d211fad9684e] ...
	I0805 10:41:19.374826    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d211fad9684e"
	I0805 10:41:19.386578    9085 logs.go:123] Gathering logs for kube-controller-manager [7b6baa84c14b] ...
	I0805 10:41:19.386596    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b6baa84c14b"
	I0805 10:41:21.905421    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:41:26.907597    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:41:26.907856    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:41:26.933785    9085 logs.go:276] 2 containers: [d4f28dbbc4f1 13057f94c0f8]
	I0805 10:41:26.933923    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:41:26.951677    9085 logs.go:276] 2 containers: [275aaaabca50 d71cd5277bf8]
	I0805 10:41:26.951768    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:41:26.964697    9085 logs.go:276] 1 containers: [d211fad9684e]
	I0805 10:41:26.964772    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:41:26.976096    9085 logs.go:276] 2 containers: [ef0fa267cdc3 7c8977dcd66d]
	I0805 10:41:26.976158    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:41:26.986413    9085 logs.go:276] 1 containers: [9df378c7864e]
	I0805 10:41:26.986483    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:41:26.999849    9085 logs.go:276] 2 containers: [7b6baa84c14b 2c9aa7466dbd]
	I0805 10:41:26.999918    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:41:27.011123    9085 logs.go:276] 0 containers: []
	W0805 10:41:27.011135    9085 logs.go:278] No container was found matching "kindnet"
	I0805 10:41:27.011190    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:41:27.020867    9085 logs.go:276] 0 containers: []
	W0805 10:41:27.020880    9085 logs.go:278] No container was found matching "storage-provisioner"
	I0805 10:41:27.020887    9085 logs.go:123] Gathering logs for Docker ...
	I0805 10:41:27.020893    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:41:27.044786    9085 logs.go:123] Gathering logs for container status ...
	I0805 10:41:27.044793    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:41:27.056702    9085 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:41:27.056712    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:41:27.093277    9085 logs.go:123] Gathering logs for coredns [d211fad9684e] ...
	I0805 10:41:27.093289    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d211fad9684e"
	I0805 10:41:27.104317    9085 logs.go:123] Gathering logs for etcd [d71cd5277bf8] ...
	I0805 10:41:27.104332    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71cd5277bf8"
	I0805 10:41:27.118340    9085 logs.go:123] Gathering logs for kube-scheduler [ef0fa267cdc3] ...
	I0805 10:41:27.118350    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef0fa267cdc3"
	I0805 10:41:27.130268    9085 logs.go:123] Gathering logs for kube-proxy [9df378c7864e] ...
	I0805 10:41:27.130279    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9df378c7864e"
	I0805 10:41:27.146394    9085 logs.go:123] Gathering logs for kube-controller-manager [7b6baa84c14b] ...
	I0805 10:41:27.146406    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b6baa84c14b"
	I0805 10:41:27.163524    9085 logs.go:123] Gathering logs for kubelet ...
	I0805 10:41:27.163533    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:41:27.205858    9085 logs.go:123] Gathering logs for etcd [275aaaabca50] ...
	I0805 10:41:27.205869    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 275aaaabca50"
	I0805 10:41:27.220701    9085 logs.go:123] Gathering logs for kube-apiserver [13057f94c0f8] ...
	I0805 10:41:27.220710    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13057f94c0f8"
	I0805 10:41:27.234899    9085 logs.go:123] Gathering logs for kube-scheduler [7c8977dcd66d] ...
	I0805 10:41:27.234910    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c8977dcd66d"
	I0805 10:41:27.246378    9085 logs.go:123] Gathering logs for kube-controller-manager [2c9aa7466dbd] ...
	I0805 10:41:27.246389    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c9aa7466dbd"
	I0805 10:41:27.258018    9085 logs.go:123] Gathering logs for dmesg ...
	I0805 10:41:27.258029    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:41:27.262800    9085 logs.go:123] Gathering logs for kube-apiserver [d4f28dbbc4f1] ...
	I0805 10:41:27.262807    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f28dbbc4f1"
	I0805 10:41:29.778448    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:41:34.780721    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:41:34.781284    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:41:34.820296    9085 logs.go:276] 2 containers: [d4f28dbbc4f1 13057f94c0f8]
	I0805 10:41:34.820439    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:41:34.842244    9085 logs.go:276] 2 containers: [275aaaabca50 d71cd5277bf8]
	I0805 10:41:34.842354    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:41:34.858219    9085 logs.go:276] 1 containers: [d211fad9684e]
	I0805 10:41:34.858300    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:41:34.870902    9085 logs.go:276] 2 containers: [ef0fa267cdc3 7c8977dcd66d]
	I0805 10:41:34.870977    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:41:34.885243    9085 logs.go:276] 1 containers: [9df378c7864e]
	I0805 10:41:34.885308    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:41:34.895891    9085 logs.go:276] 2 containers: [7b6baa84c14b 2c9aa7466dbd]
	I0805 10:41:34.895959    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:41:34.906290    9085 logs.go:276] 0 containers: []
	W0805 10:41:34.906303    9085 logs.go:278] No container was found matching "kindnet"
	I0805 10:41:34.906369    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:41:34.916614    9085 logs.go:276] 0 containers: []
	W0805 10:41:34.916626    9085 logs.go:278] No container was found matching "storage-provisioner"
	I0805 10:41:34.916634    9085 logs.go:123] Gathering logs for Docker ...
	I0805 10:41:34.916640    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:41:34.941068    9085 logs.go:123] Gathering logs for kube-apiserver [d4f28dbbc4f1] ...
	I0805 10:41:34.941081    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f28dbbc4f1"
	I0805 10:41:34.955213    9085 logs.go:123] Gathering logs for kube-apiserver [13057f94c0f8] ...
	I0805 10:41:34.955225    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13057f94c0f8"
	I0805 10:41:34.968446    9085 logs.go:123] Gathering logs for etcd [275aaaabca50] ...
	I0805 10:41:34.968457    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 275aaaabca50"
	I0805 10:41:34.982785    9085 logs.go:123] Gathering logs for etcd [d71cd5277bf8] ...
	I0805 10:41:34.982794    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71cd5277bf8"
	I0805 10:41:34.997677    9085 logs.go:123] Gathering logs for kube-scheduler [ef0fa267cdc3] ...
	I0805 10:41:34.997687    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef0fa267cdc3"
	I0805 10:41:35.011174    9085 logs.go:123] Gathering logs for dmesg ...
	I0805 10:41:35.011185    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:41:35.015662    9085 logs.go:123] Gathering logs for coredns [d211fad9684e] ...
	I0805 10:41:35.015669    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d211fad9684e"
	I0805 10:41:35.027197    9085 logs.go:123] Gathering logs for kube-scheduler [7c8977dcd66d] ...
	I0805 10:41:35.027208    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c8977dcd66d"
	I0805 10:41:35.038914    9085 logs.go:123] Gathering logs for kube-controller-manager [7b6baa84c14b] ...
	I0805 10:41:35.038925    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b6baa84c14b"
	I0805 10:41:35.055877    9085 logs.go:123] Gathering logs for container status ...
	I0805 10:41:35.055887    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:41:35.067275    9085 logs.go:123] Gathering logs for kubelet ...
	I0805 10:41:35.067285    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:41:35.108726    9085 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:41:35.108735    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:41:35.149605    9085 logs.go:123] Gathering logs for kube-proxy [9df378c7864e] ...
	I0805 10:41:35.149617    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9df378c7864e"
	I0805 10:41:35.162174    9085 logs.go:123] Gathering logs for kube-controller-manager [2c9aa7466dbd] ...
	I0805 10:41:35.162188    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c9aa7466dbd"
	I0805 10:41:37.674644    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:41:42.676487    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:41:42.676761    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:41:42.700636    9085 logs.go:276] 2 containers: [d4f28dbbc4f1 13057f94c0f8]
	I0805 10:41:42.700757    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:41:42.716698    9085 logs.go:276] 2 containers: [275aaaabca50 d71cd5277bf8]
	I0805 10:41:42.716779    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:41:42.730014    9085 logs.go:276] 1 containers: [d211fad9684e]
	I0805 10:41:42.730087    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:41:42.741088    9085 logs.go:276] 2 containers: [ef0fa267cdc3 7c8977dcd66d]
	I0805 10:41:42.741166    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:41:42.751046    9085 logs.go:276] 1 containers: [9df378c7864e]
	I0805 10:41:42.751109    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:41:42.761798    9085 logs.go:276] 2 containers: [7b6baa84c14b 2c9aa7466dbd]
	I0805 10:41:42.761861    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:41:42.779122    9085 logs.go:276] 0 containers: []
	W0805 10:41:42.779133    9085 logs.go:278] No container was found matching "kindnet"
	I0805 10:41:42.779209    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:41:42.788975    9085 logs.go:276] 0 containers: []
	W0805 10:41:42.788986    9085 logs.go:278] No container was found matching "storage-provisioner"
	I0805 10:41:42.788994    9085 logs.go:123] Gathering logs for dmesg ...
	I0805 10:41:42.788999    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:41:42.793900    9085 logs.go:123] Gathering logs for kube-apiserver [d4f28dbbc4f1] ...
	I0805 10:41:42.793906    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f28dbbc4f1"
	I0805 10:41:42.808093    9085 logs.go:123] Gathering logs for kubelet ...
	I0805 10:41:42.808103    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:41:42.850142    9085 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:41:42.850150    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:41:42.884430    9085 logs.go:123] Gathering logs for etcd [d71cd5277bf8] ...
	I0805 10:41:42.884441    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71cd5277bf8"
	I0805 10:41:42.903208    9085 logs.go:123] Gathering logs for kube-controller-manager [2c9aa7466dbd] ...
	I0805 10:41:42.903218    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c9aa7466dbd"
	I0805 10:41:42.914477    9085 logs.go:123] Gathering logs for Docker ...
	I0805 10:41:42.914490    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:41:42.938689    9085 logs.go:123] Gathering logs for container status ...
	I0805 10:41:42.938697    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:41:42.950733    9085 logs.go:123] Gathering logs for kube-apiserver [13057f94c0f8] ...
	I0805 10:41:42.950744    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13057f94c0f8"
	I0805 10:41:42.963494    9085 logs.go:123] Gathering logs for coredns [d211fad9684e] ...
	I0805 10:41:42.963505    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d211fad9684e"
	I0805 10:41:42.974827    9085 logs.go:123] Gathering logs for kube-controller-manager [7b6baa84c14b] ...
	I0805 10:41:42.974837    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b6baa84c14b"
	I0805 10:41:42.992661    9085 logs.go:123] Gathering logs for kube-proxy [9df378c7864e] ...
	I0805 10:41:42.992674    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9df378c7864e"
	I0805 10:41:43.007012    9085 logs.go:123] Gathering logs for etcd [275aaaabca50] ...
	I0805 10:41:43.007025    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 275aaaabca50"
	I0805 10:41:43.020574    9085 logs.go:123] Gathering logs for kube-scheduler [ef0fa267cdc3] ...
	I0805 10:41:43.020585    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef0fa267cdc3"
	I0805 10:41:43.033482    9085 logs.go:123] Gathering logs for kube-scheduler [7c8977dcd66d] ...
	I0805 10:41:43.033493    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c8977dcd66d"
	I0805 10:41:45.546628    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:41:50.548822    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:41:50.549238    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:41:50.586665    9085 logs.go:276] 2 containers: [d4f28dbbc4f1 13057f94c0f8]
	I0805 10:41:50.586808    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:41:50.607106    9085 logs.go:276] 2 containers: [275aaaabca50 d71cd5277bf8]
	I0805 10:41:50.607210    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:41:50.622256    9085 logs.go:276] 1 containers: [d211fad9684e]
	I0805 10:41:50.622332    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:41:50.637906    9085 logs.go:276] 2 containers: [ef0fa267cdc3 7c8977dcd66d]
	I0805 10:41:50.637982    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:41:50.652847    9085 logs.go:276] 1 containers: [9df378c7864e]
	I0805 10:41:50.652918    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:41:50.663847    9085 logs.go:276] 2 containers: [7b6baa84c14b 2c9aa7466dbd]
	I0805 10:41:50.663920    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:41:50.680010    9085 logs.go:276] 0 containers: []
	W0805 10:41:50.680023    9085 logs.go:278] No container was found matching "kindnet"
	I0805 10:41:50.680083    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:41:50.690856    9085 logs.go:276] 0 containers: []
	W0805 10:41:50.690870    9085 logs.go:278] No container was found matching "storage-provisioner"
	I0805 10:41:50.690878    9085 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:41:50.690884    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:41:50.724285    9085 logs.go:123] Gathering logs for kube-scheduler [7c8977dcd66d] ...
	I0805 10:41:50.724299    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c8977dcd66d"
	I0805 10:41:50.740777    9085 logs.go:123] Gathering logs for kube-proxy [9df378c7864e] ...
	I0805 10:41:50.740789    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9df378c7864e"
	I0805 10:41:50.752965    9085 logs.go:123] Gathering logs for Docker ...
	I0805 10:41:50.752978    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:41:50.776623    9085 logs.go:123] Gathering logs for etcd [275aaaabca50] ...
	I0805 10:41:50.776631    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 275aaaabca50"
	I0805 10:41:50.790787    9085 logs.go:123] Gathering logs for dmesg ...
	I0805 10:41:50.790803    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:41:50.795148    9085 logs.go:123] Gathering logs for kube-apiserver [d4f28dbbc4f1] ...
	I0805 10:41:50.795155    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f28dbbc4f1"
	I0805 10:41:50.809745    9085 logs.go:123] Gathering logs for etcd [d71cd5277bf8] ...
	I0805 10:41:50.809756    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71cd5277bf8"
	I0805 10:41:50.824378    9085 logs.go:123] Gathering logs for coredns [d211fad9684e] ...
	I0805 10:41:50.824388    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d211fad9684e"
	I0805 10:41:50.836986    9085 logs.go:123] Gathering logs for kube-controller-manager [7b6baa84c14b] ...
	I0805 10:41:50.836998    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b6baa84c14b"
	I0805 10:41:50.854677    9085 logs.go:123] Gathering logs for kube-controller-manager [2c9aa7466dbd] ...
	I0805 10:41:50.854687    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c9aa7466dbd"
	I0805 10:41:50.866286    9085 logs.go:123] Gathering logs for kubelet ...
	I0805 10:41:50.866298    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:41:50.909072    9085 logs.go:123] Gathering logs for kube-apiserver [13057f94c0f8] ...
	I0805 10:41:50.909083    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13057f94c0f8"
	I0805 10:41:50.920504    9085 logs.go:123] Gathering logs for kube-scheduler [ef0fa267cdc3] ...
	I0805 10:41:50.920515    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef0fa267cdc3"
	I0805 10:41:50.932298    9085 logs.go:123] Gathering logs for container status ...
	I0805 10:41:50.932311    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:41:53.447967    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:41:58.450126    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:41:58.450305    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:41:58.463438    9085 logs.go:276] 2 containers: [d4f28dbbc4f1 13057f94c0f8]
	I0805 10:41:58.463520    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:41:58.474811    9085 logs.go:276] 2 containers: [275aaaabca50 d71cd5277bf8]
	I0805 10:41:58.474877    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:41:58.485368    9085 logs.go:276] 1 containers: [d211fad9684e]
	I0805 10:41:58.485437    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:41:58.499430    9085 logs.go:276] 2 containers: [ef0fa267cdc3 7c8977dcd66d]
	I0805 10:41:58.499501    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:41:58.510279    9085 logs.go:276] 1 containers: [9df378c7864e]
	I0805 10:41:58.510351    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:41:58.521484    9085 logs.go:276] 2 containers: [7b6baa84c14b 2c9aa7466dbd]
	I0805 10:41:58.521553    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:41:58.532261    9085 logs.go:276] 0 containers: []
	W0805 10:41:58.532276    9085 logs.go:278] No container was found matching "kindnet"
	I0805 10:41:58.532332    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:41:58.542422    9085 logs.go:276] 0 containers: []
	W0805 10:41:58.542439    9085 logs.go:278] No container was found matching "storage-provisioner"
	I0805 10:41:58.542446    9085 logs.go:123] Gathering logs for container status ...
	I0805 10:41:58.542452    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:41:58.553875    9085 logs.go:123] Gathering logs for kube-scheduler [7c8977dcd66d] ...
	I0805 10:41:58.553886    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c8977dcd66d"
	I0805 10:41:58.565448    9085 logs.go:123] Gathering logs for kube-proxy [9df378c7864e] ...
	I0805 10:41:58.565460    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9df378c7864e"
	I0805 10:41:58.577996    9085 logs.go:123] Gathering logs for kube-scheduler [ef0fa267cdc3] ...
	I0805 10:41:58.578012    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef0fa267cdc3"
	I0805 10:41:58.590006    9085 logs.go:123] Gathering logs for Docker ...
	I0805 10:41:58.590016    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:41:58.613435    9085 logs.go:123] Gathering logs for kubelet ...
	I0805 10:41:58.613443    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:41:58.654065    9085 logs.go:123] Gathering logs for kube-apiserver [d4f28dbbc4f1] ...
	I0805 10:41:58.654080    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f28dbbc4f1"
	I0805 10:41:58.673238    9085 logs.go:123] Gathering logs for kube-controller-manager [2c9aa7466dbd] ...
	I0805 10:41:58.673249    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c9aa7466dbd"
	I0805 10:41:58.685391    9085 logs.go:123] Gathering logs for dmesg ...
	I0805 10:41:58.685402    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:41:58.689845    9085 logs.go:123] Gathering logs for coredns [d211fad9684e] ...
	I0805 10:41:58.689853    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d211fad9684e"
	I0805 10:41:58.701551    9085 logs.go:123] Gathering logs for etcd [275aaaabca50] ...
	I0805 10:41:58.701563    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 275aaaabca50"
	I0805 10:41:58.715629    9085 logs.go:123] Gathering logs for etcd [d71cd5277bf8] ...
	I0805 10:41:58.715641    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71cd5277bf8"
	I0805 10:41:58.730904    9085 logs.go:123] Gathering logs for kube-controller-manager [7b6baa84c14b] ...
	I0805 10:41:58.730915    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b6baa84c14b"
	I0805 10:41:58.754792    9085 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:41:58.754801    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:41:58.788312    9085 logs.go:123] Gathering logs for kube-apiserver [13057f94c0f8] ...
	I0805 10:41:58.788329    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13057f94c0f8"
	I0805 10:42:01.302728    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:42:06.304196    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:42:06.304757    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:42:06.337599    9085 logs.go:276] 2 containers: [d4f28dbbc4f1 13057f94c0f8]
	I0805 10:42:06.337750    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:42:06.363456    9085 logs.go:276] 2 containers: [275aaaabca50 d71cd5277bf8]
	I0805 10:42:06.363539    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:42:06.377315    9085 logs.go:276] 1 containers: [d211fad9684e]
	I0805 10:42:06.377391    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:42:06.394301    9085 logs.go:276] 2 containers: [ef0fa267cdc3 7c8977dcd66d]
	I0805 10:42:06.394374    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:42:06.411286    9085 logs.go:276] 1 containers: [9df378c7864e]
	I0805 10:42:06.411360    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:42:06.421766    9085 logs.go:276] 2 containers: [7b6baa84c14b 2c9aa7466dbd]
	I0805 10:42:06.421834    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:42:06.435216    9085 logs.go:276] 0 containers: []
	W0805 10:42:06.435227    9085 logs.go:278] No container was found matching "kindnet"
	I0805 10:42:06.435285    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:42:06.447657    9085 logs.go:276] 0 containers: []
	W0805 10:42:06.447669    9085 logs.go:278] No container was found matching "storage-provisioner"
	I0805 10:42:06.447676    9085 logs.go:123] Gathering logs for kubelet ...
	I0805 10:42:06.447747    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:42:06.490719    9085 logs.go:123] Gathering logs for dmesg ...
	I0805 10:42:06.490729    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:42:06.495083    9085 logs.go:123] Gathering logs for etcd [d71cd5277bf8] ...
	I0805 10:42:06.495092    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71cd5277bf8"
	I0805 10:42:06.509805    9085 logs.go:123] Gathering logs for container status ...
	I0805 10:42:06.509821    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:42:06.521763    9085 logs.go:123] Gathering logs for kube-apiserver [d4f28dbbc4f1] ...
	I0805 10:42:06.521775    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f28dbbc4f1"
	I0805 10:42:06.535923    9085 logs.go:123] Gathering logs for etcd [275aaaabca50] ...
	I0805 10:42:06.535938    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 275aaaabca50"
	I0805 10:42:06.549956    9085 logs.go:123] Gathering logs for kube-scheduler [ef0fa267cdc3] ...
	I0805 10:42:06.549965    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef0fa267cdc3"
	I0805 10:42:06.561592    9085 logs.go:123] Gathering logs for kube-proxy [9df378c7864e] ...
	I0805 10:42:06.561603    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9df378c7864e"
	I0805 10:42:06.573776    9085 logs.go:123] Gathering logs for kube-controller-manager [7b6baa84c14b] ...
	I0805 10:42:06.573787    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b6baa84c14b"
	I0805 10:42:06.593803    9085 logs.go:123] Gathering logs for Docker ...
	I0805 10:42:06.593816    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:42:06.618567    9085 logs.go:123] Gathering logs for kube-apiserver [13057f94c0f8] ...
	I0805 10:42:06.618575    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13057f94c0f8"
	I0805 10:42:06.633259    9085 logs.go:123] Gathering logs for kube-scheduler [7c8977dcd66d] ...
	I0805 10:42:06.633269    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c8977dcd66d"
	I0805 10:42:06.644038    9085 logs.go:123] Gathering logs for kube-controller-manager [2c9aa7466dbd] ...
	I0805 10:42:06.644050    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c9aa7466dbd"
	I0805 10:42:06.655605    9085 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:42:06.655615    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:42:06.690364    9085 logs.go:123] Gathering logs for coredns [d211fad9684e] ...
	I0805 10:42:06.690376    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d211fad9684e"
	I0805 10:42:09.202031    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:42:14.204249    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:42:14.204682    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:42:14.242114    9085 logs.go:276] 2 containers: [d4f28dbbc4f1 13057f94c0f8]
	I0805 10:42:14.242266    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:42:14.262673    9085 logs.go:276] 2 containers: [275aaaabca50 d71cd5277bf8]
	I0805 10:42:14.262769    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:42:14.277114    9085 logs.go:276] 1 containers: [d211fad9684e]
	I0805 10:42:14.277215    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:42:14.289247    9085 logs.go:276] 2 containers: [ef0fa267cdc3 7c8977dcd66d]
	I0805 10:42:14.289315    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:42:14.300675    9085 logs.go:276] 1 containers: [9df378c7864e]
	I0805 10:42:14.300735    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:42:14.312755    9085 logs.go:276] 2 containers: [7b6baa84c14b 2c9aa7466dbd]
	I0805 10:42:14.312823    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:42:14.323690    9085 logs.go:276] 0 containers: []
	W0805 10:42:14.323702    9085 logs.go:278] No container was found matching "kindnet"
	I0805 10:42:14.323750    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:42:14.334330    9085 logs.go:276] 0 containers: []
	W0805 10:42:14.334342    9085 logs.go:278] No container was found matching "storage-provisioner"
	I0805 10:42:14.334349    9085 logs.go:123] Gathering logs for etcd [d71cd5277bf8] ...
	I0805 10:42:14.334356    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71cd5277bf8"
	I0805 10:42:14.349249    9085 logs.go:123] Gathering logs for Docker ...
	I0805 10:42:14.349261    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:42:14.372161    9085 logs.go:123] Gathering logs for container status ...
	I0805 10:42:14.372169    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:42:14.384866    9085 logs.go:123] Gathering logs for kube-apiserver [13057f94c0f8] ...
	I0805 10:42:14.384878    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13057f94c0f8"
	I0805 10:42:14.397281    9085 logs.go:123] Gathering logs for coredns [d211fad9684e] ...
	I0805 10:42:14.397292    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d211fad9684e"
	I0805 10:42:14.411741    9085 logs.go:123] Gathering logs for kube-scheduler [ef0fa267cdc3] ...
	I0805 10:42:14.411757    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef0fa267cdc3"
	I0805 10:42:14.423756    9085 logs.go:123] Gathering logs for kube-scheduler [7c8977dcd66d] ...
	I0805 10:42:14.423767    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c8977dcd66d"
	I0805 10:42:14.435560    9085 logs.go:123] Gathering logs for kube-proxy [9df378c7864e] ...
	I0805 10:42:14.435571    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9df378c7864e"
	I0805 10:42:14.447374    9085 logs.go:123] Gathering logs for dmesg ...
	I0805 10:42:14.447386    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:42:14.451837    9085 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:42:14.451844    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:42:14.485931    9085 logs.go:123] Gathering logs for kube-apiserver [d4f28dbbc4f1] ...
	I0805 10:42:14.485944    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f28dbbc4f1"
	I0805 10:42:14.504044    9085 logs.go:123] Gathering logs for kube-controller-manager [7b6baa84c14b] ...
	I0805 10:42:14.504056    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b6baa84c14b"
	I0805 10:42:14.526145    9085 logs.go:123] Gathering logs for kube-controller-manager [2c9aa7466dbd] ...
	I0805 10:42:14.526157    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c9aa7466dbd"
	I0805 10:42:14.537157    9085 logs.go:123] Gathering logs for kubelet ...
	I0805 10:42:14.537170    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:42:14.579565    9085 logs.go:123] Gathering logs for etcd [275aaaabca50] ...
	I0805 10:42:14.579577    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 275aaaabca50"
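
Note: taken together, this whole section is one outer wait loop: probe, fail after 5 s, spend a few seconds gathering, probe again, with no healthy answer from 10:40 past 10:42. A sketch of that cadence follows; the stubs stand in for the earlier sketches, and the overall deadline is an assumption, since the log shows only the repetition, not the actual budget.

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // Stubs standing in for the probe and gathering sketches shown earlier.
    func probeHealthz(url string) error { return errors.New("apiserver not ready") }

    func gatherAll() { fmt.Println("gathering diagnostics ...") }

    // waitForAPIServer reproduces the outer cadence of this log: probe, fail,
    // gather diagnostics, pause briefly, probe again, until a deadline expires.
    func waitForAPIServer(url string, deadline time.Duration) error {
        stop := time.Now().Add(deadline)
        for time.Now().Before(stop) {
            if err := probeHealthz(url); err == nil {
                return nil
            }
            gatherAll()
            time.Sleep(2 * time.Second) // the log shows a 2-3 s gap between cycles
        }
        return errors.New("apiserver never became healthy before the deadline")
    }

    func main() {
        // Short budget for the sketch; the real wait in this run is far longer.
        if err := waitForAPIServer("https://10.0.2.15:8443/healthz", 10*time.Second); err != nil {
            fmt.Println(err)
        }
    }
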
	I0805 10:42:17.094606    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:42:22.097170    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:42:22.097452    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:42:22.125947    9085 logs.go:276] 2 containers: [d4f28dbbc4f1 13057f94c0f8]
	I0805 10:42:22.126066    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:42:22.143084    9085 logs.go:276] 2 containers: [275aaaabca50 d71cd5277bf8]
	I0805 10:42:22.143173    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:42:22.155898    9085 logs.go:276] 1 containers: [d211fad9684e]
	I0805 10:42:22.155977    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:42:22.167996    9085 logs.go:276] 2 containers: [ef0fa267cdc3 7c8977dcd66d]
	I0805 10:42:22.168065    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:42:22.178394    9085 logs.go:276] 1 containers: [9df378c7864e]
	I0805 10:42:22.178460    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:42:22.188881    9085 logs.go:276] 2 containers: [7b6baa84c14b 2c9aa7466dbd]
	I0805 10:42:22.188950    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:42:22.199345    9085 logs.go:276] 0 containers: []
	W0805 10:42:22.199358    9085 logs.go:278] No container was found matching "kindnet"
	I0805 10:42:22.199411    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:42:22.209005    9085 logs.go:276] 0 containers: []
	W0805 10:42:22.209016    9085 logs.go:278] No container was found matching "storage-provisioner"
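
The docker ps block above is the per-component container discovery that precedes each log-gathering pass: one filtered listing per control-plane component, matching kubelet's k8s_<component> container-naming convention. The same enumeration condensed into a loop (a sketch; the commands are issued individually as logged):

	# hedged sketch: list container IDs for each expected component
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	         kube-controller-manager kindnet storage-provisioner; do
	  echo "$c: $(docker ps -a --filter=name=k8s_$c --format '{{.ID}}' | tr '\n' ' ')"
	done
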
	I0805 10:42:22.209022    9085 logs.go:123] Gathering logs for Docker ...
	I0805 10:42:22.209028    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:42:22.233623    9085 logs.go:123] Gathering logs for container status ...
	I0805 10:42:22.233631    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:42:22.245309    9085 logs.go:123] Gathering logs for kube-scheduler [7c8977dcd66d] ...
	I0805 10:42:22.245323    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c8977dcd66d"
	I0805 10:42:22.256947    9085 logs.go:123] Gathering logs for kube-apiserver [13057f94c0f8] ...
	I0805 10:42:22.256958    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13057f94c0f8"
	I0805 10:42:22.268605    9085 logs.go:123] Gathering logs for etcd [d71cd5277bf8] ...
	I0805 10:42:22.268617    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71cd5277bf8"
	I0805 10:42:22.287059    9085 logs.go:123] Gathering logs for kube-scheduler [ef0fa267cdc3] ...
	I0805 10:42:22.287068    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef0fa267cdc3"
	I0805 10:42:22.302881    9085 logs.go:123] Gathering logs for dmesg ...
	I0805 10:42:22.302895    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:42:22.307354    9085 logs.go:123] Gathering logs for etcd [275aaaabca50] ...
	I0805 10:42:22.307360    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 275aaaabca50"
	I0805 10:42:22.321437    9085 logs.go:123] Gathering logs for coredns [d211fad9684e] ...
	I0805 10:42:22.321451    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d211fad9684e"
	I0805 10:42:22.333056    9085 logs.go:123] Gathering logs for kube-proxy [9df378c7864e] ...
	I0805 10:42:22.333067    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9df378c7864e"
	I0805 10:42:22.345037    9085 logs.go:123] Gathering logs for kube-controller-manager [7b6baa84c14b] ...
	I0805 10:42:22.345047    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b6baa84c14b"
	I0805 10:42:22.363704    9085 logs.go:123] Gathering logs for kube-controller-manager [2c9aa7466dbd] ...
	I0805 10:42:22.363718    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c9aa7466dbd"
	I0805 10:42:22.377912    9085 logs.go:123] Gathering logs for kubelet ...
	I0805 10:42:22.377922    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:42:22.420613    9085 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:42:22.420628    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:42:22.459913    9085 logs.go:123] Gathering logs for kube-apiserver [d4f28dbbc4f1] ...
	I0805 10:42:22.459924    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f28dbbc4f1"
	I0805 10:42:24.976538    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:42:29.976937    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:42:29.977110    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:42:29.993807    9085 logs.go:276] 2 containers: [d4f28dbbc4f1 13057f94c0f8]
	I0805 10:42:29.993898    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:42:30.005794    9085 logs.go:276] 2 containers: [275aaaabca50 d71cd5277bf8]
	I0805 10:42:30.005857    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:42:30.021189    9085 logs.go:276] 1 containers: [d211fad9684e]
	I0805 10:42:30.021258    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:42:30.031796    9085 logs.go:276] 2 containers: [ef0fa267cdc3 7c8977dcd66d]
	I0805 10:42:30.031875    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:42:30.047559    9085 logs.go:276] 1 containers: [9df378c7864e]
	I0805 10:42:30.047632    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:42:30.057959    9085 logs.go:276] 2 containers: [7b6baa84c14b 2c9aa7466dbd]
	I0805 10:42:30.058030    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:42:30.067754    9085 logs.go:276] 0 containers: []
	W0805 10:42:30.067765    9085 logs.go:278] No container was found matching "kindnet"
	I0805 10:42:30.067824    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:42:30.077617    9085 logs.go:276] 0 containers: []
	W0805 10:42:30.077630    9085 logs.go:278] No container was found matching "storage-provisioner"
	I0805 10:42:30.077638    9085 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:42:30.077644    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:42:30.111710    9085 logs.go:123] Gathering logs for etcd [d71cd5277bf8] ...
	I0805 10:42:30.111721    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71cd5277bf8"
	I0805 10:42:30.126150    9085 logs.go:123] Gathering logs for kube-scheduler [ef0fa267cdc3] ...
	I0805 10:42:30.126161    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef0fa267cdc3"
	I0805 10:42:30.138007    9085 logs.go:123] Gathering logs for kube-controller-manager [7b6baa84c14b] ...
	I0805 10:42:30.138017    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b6baa84c14b"
	I0805 10:42:30.156162    9085 logs.go:123] Gathering logs for Docker ...
	I0805 10:42:30.156173    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:42:30.179298    9085 logs.go:123] Gathering logs for kube-apiserver [d4f28dbbc4f1] ...
	I0805 10:42:30.179306    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f28dbbc4f1"
	I0805 10:42:30.193329    9085 logs.go:123] Gathering logs for kube-apiserver [13057f94c0f8] ...
	I0805 10:42:30.193339    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13057f94c0f8"
	I0805 10:42:30.205924    9085 logs.go:123] Gathering logs for etcd [275aaaabca50] ...
	I0805 10:42:30.205936    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 275aaaabca50"
	I0805 10:42:30.220204    9085 logs.go:123] Gathering logs for coredns [d211fad9684e] ...
	I0805 10:42:30.220214    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d211fad9684e"
	I0805 10:42:30.231932    9085 logs.go:123] Gathering logs for kube-proxy [9df378c7864e] ...
	I0805 10:42:30.231941    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9df378c7864e"
	I0805 10:42:30.243323    9085 logs.go:123] Gathering logs for container status ...
	I0805 10:42:30.243334    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:42:30.255690    9085 logs.go:123] Gathering logs for dmesg ...
	I0805 10:42:30.255705    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:42:30.260134    9085 logs.go:123] Gathering logs for kube-controller-manager [2c9aa7466dbd] ...
	I0805 10:42:30.260141    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c9aa7466dbd"
	I0805 10:42:30.271497    9085 logs.go:123] Gathering logs for kubelet ...
	I0805 10:42:30.271508    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:42:30.312798    9085 logs.go:123] Gathering logs for kube-scheduler [7c8977dcd66d] ...
	I0805 10:42:30.312807    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c8977dcd66d"
	I0805 10:42:32.825916    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:42:37.828526    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:42:37.828673    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:42:37.845304    9085 logs.go:276] 2 containers: [d4f28dbbc4f1 13057f94c0f8]
	I0805 10:42:37.845377    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:42:37.856686    9085 logs.go:276] 2 containers: [275aaaabca50 d71cd5277bf8]
	I0805 10:42:37.856762    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:42:37.867181    9085 logs.go:276] 1 containers: [d211fad9684e]
	I0805 10:42:37.867250    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:42:37.878098    9085 logs.go:276] 2 containers: [ef0fa267cdc3 7c8977dcd66d]
	I0805 10:42:37.878173    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:42:37.889271    9085 logs.go:276] 1 containers: [9df378c7864e]
	I0805 10:42:37.889339    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:42:37.900099    9085 logs.go:276] 2 containers: [7b6baa84c14b 2c9aa7466dbd]
	I0805 10:42:37.900171    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:42:37.912440    9085 logs.go:276] 0 containers: []
	W0805 10:42:37.912451    9085 logs.go:278] No container was found matching "kindnet"
	I0805 10:42:37.912502    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:42:37.922990    9085 logs.go:276] 0 containers: []
	W0805 10:42:37.923001    9085 logs.go:278] No container was found matching "storage-provisioner"
	I0805 10:42:37.923011    9085 logs.go:123] Gathering logs for coredns [d211fad9684e] ...
	I0805 10:42:37.923016    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d211fad9684e"
	I0805 10:42:37.934615    9085 logs.go:123] Gathering logs for kube-scheduler [ef0fa267cdc3] ...
	I0805 10:42:37.934627    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef0fa267cdc3"
	I0805 10:42:37.946785    9085 logs.go:123] Gathering logs for kube-proxy [9df378c7864e] ...
	I0805 10:42:37.946799    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9df378c7864e"
	I0805 10:42:37.958879    9085 logs.go:123] Gathering logs for kube-controller-manager [2c9aa7466dbd] ...
	I0805 10:42:37.958891    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c9aa7466dbd"
	I0805 10:42:37.971411    9085 logs.go:123] Gathering logs for container status ...
	I0805 10:42:37.971423    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:42:37.983563    9085 logs.go:123] Gathering logs for kube-apiserver [d4f28dbbc4f1] ...
	I0805 10:42:37.983580    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f28dbbc4f1"
	I0805 10:42:37.999826    9085 logs.go:123] Gathering logs for kube-apiserver [13057f94c0f8] ...
	I0805 10:42:37.999838    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13057f94c0f8"
	I0805 10:42:38.011798    9085 logs.go:123] Gathering logs for kube-controller-manager [7b6baa84c14b] ...
	I0805 10:42:38.011814    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b6baa84c14b"
	I0805 10:42:38.029395    9085 logs.go:123] Gathering logs for Docker ...
	I0805 10:42:38.029412    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:42:38.053617    9085 logs.go:123] Gathering logs for etcd [275aaaabca50] ...
	I0805 10:42:38.053634    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 275aaaabca50"
	I0805 10:42:38.067687    9085 logs.go:123] Gathering logs for etcd [d71cd5277bf8] ...
	I0805 10:42:38.067698    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71cd5277bf8"
	I0805 10:42:38.086313    9085 logs.go:123] Gathering logs for kube-scheduler [7c8977dcd66d] ...
	I0805 10:42:38.086324    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c8977dcd66d"
	I0805 10:42:38.099335    9085 logs.go:123] Gathering logs for kubelet ...
	I0805 10:42:38.099347    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:42:38.143935    9085 logs.go:123] Gathering logs for dmesg ...
	I0805 10:42:38.143952    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:42:38.148601    9085 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:42:38.148610    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:42:40.688887    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:42:45.691222    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:42:45.691301    9085 kubeadm.go:597] duration metric: took 4m3.865442417s to restartPrimaryControlPlane
	W0805 10:42:45.691378    9085 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0805 10:42:45.691411    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0805 10:42:46.627737    9085 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 10:42:46.632759    9085 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 10:42:46.635381    9085 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 10:42:46.638044    9085 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 10:42:46.638049    9085 kubeadm.go:157] found existing configuration files:
	
	I0805 10:42:46.638069    9085 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51256 /etc/kubernetes/admin.conf
	I0805 10:42:46.640638    9085 kubeadm.go:163] "https://control-plane.minikube.internal:51256" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51256 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 10:42:46.640660    9085 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 10:42:46.643101    9085 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51256 /etc/kubernetes/kubelet.conf
	I0805 10:42:46.645594    9085 kubeadm.go:163] "https://control-plane.minikube.internal:51256" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51256 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 10:42:46.645616    9085 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 10:42:46.648696    9085 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51256 /etc/kubernetes/controller-manager.conf
	I0805 10:42:46.651391    9085 kubeadm.go:163] "https://control-plane.minikube.internal:51256" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51256 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 10:42:46.651415    9085 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 10:42:46.653879    9085 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51256 /etc/kubernetes/scheduler.conf
	I0805 10:42:46.656886    9085 kubeadm.go:163] "https://control-plane.minikube.internal:51256" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51256 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 10:42:46.656915    9085 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
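
The grep/rm sequence above is the stale-kubeconfig cleanup: each /etc/kubernetes/*.conf is kept only if it already points at the expected control-plane endpoint, and here every grep exits with status 2 because kubeadm reset has already removed the files. The four logged command pairs condensed into one loop (equivalent, not what the tool literally ran):

	# hedged sketch: drop any kubeconfig not pointing at the expected endpoint
	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q "https://control-plane.minikube.internal:51256" \
	    /etc/kubernetes/$f.conf || sudo rm -f /etc/kubernetes/$f.conf
	done
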
	I0805 10:42:46.659521    9085 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
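
The --ignore-preflight-errors list in the kubeadm init call above suppresses the "directory/file already exists" and resource preflight checks so init can proceed over the pre-provisioned state on disk; that is why the [certs] lines that follow all report "Using existing". The same invocation with the list split out for readability (identical arguments, joined back with commas):

	# hedged sketch: the logged init call, ignore list split for readability
	IGNORES=(
	  DirAvailable--etc-kubernetes-manifests
	  DirAvailable--var-lib-minikube
	  DirAvailable--var-lib-minikube-etcd
	  FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml
	  FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml
	  FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml
	  FileAvailable--etc-kubernetes-manifests-etcd.yaml
	  Port-10250 Swap NumCPU Mem
	)
	sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml \
	  --ignore-preflight-errors="$(IFS=,; echo "${IGNORES[*]}")"
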
	I0805 10:42:46.675178    9085 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0805 10:42:46.675207    9085 kubeadm.go:310] [preflight] Running pre-flight checks
	I0805 10:42:46.733508    9085 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0805 10:42:46.733573    9085 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0805 10:42:46.733621    9085 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0805 10:42:46.788148    9085 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0805 10:42:46.791287    9085 out.go:204]   - Generating certificates and keys ...
	I0805 10:42:46.791318    9085 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0805 10:42:46.791355    9085 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0805 10:42:46.791395    9085 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0805 10:42:46.791427    9085 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0805 10:42:46.791462    9085 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0805 10:42:46.791493    9085 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0805 10:42:46.791538    9085 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0805 10:42:46.791584    9085 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0805 10:42:46.791658    9085 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0805 10:42:46.791711    9085 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0805 10:42:46.791735    9085 kubeadm.go:310] [certs] Using the existing "sa" key
	I0805 10:42:46.791765    9085 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0805 10:42:46.950090    9085 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0805 10:42:47.040298    9085 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0805 10:42:47.177214    9085 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0805 10:42:47.399963    9085 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0805 10:42:47.431002    9085 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0805 10:42:47.431380    9085 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0805 10:42:47.431403    9085 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0805 10:42:47.503719    9085 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0805 10:42:47.507875    9085 out.go:204]   - Booting up control plane ...
	I0805 10:42:47.507917    9085 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0805 10:42:47.507962    9085 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0805 10:42:47.507991    9085 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0805 10:42:47.508033    9085 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0805 10:42:47.508127    9085 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0805 10:42:52.012736    9085 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.505868 seconds
	I0805 10:42:52.012804    9085 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0805 10:42:52.016469    9085 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0805 10:42:52.527819    9085 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0805 10:42:52.527974    9085 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-952000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0805 10:42:53.031919    9085 kubeadm.go:310] [bootstrap-token] Using token: qtm75q.q9mybrkyko74z444
	I0805 10:42:53.033858    9085 out.go:204]   - Configuring RBAC rules ...
	I0805 10:42:53.033917    9085 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0805 10:42:53.034046    9085 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0805 10:42:53.040674    9085 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0805 10:42:53.041659    9085 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0805 10:42:53.042633    9085 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0805 10:42:53.043736    9085 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0805 10:42:53.046840    9085 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0805 10:42:53.229993    9085 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0805 10:42:53.435877    9085 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0805 10:42:53.436369    9085 kubeadm.go:310] 
	I0805 10:42:53.436402    9085 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0805 10:42:53.436406    9085 kubeadm.go:310] 
	I0805 10:42:53.436448    9085 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0805 10:42:53.436454    9085 kubeadm.go:310] 
	I0805 10:42:53.436483    9085 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0805 10:42:53.436528    9085 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0805 10:42:53.436559    9085 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0805 10:42:53.436562    9085 kubeadm.go:310] 
	I0805 10:42:53.436593    9085 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0805 10:42:53.436596    9085 kubeadm.go:310] 
	I0805 10:42:53.436623    9085 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0805 10:42:53.436629    9085 kubeadm.go:310] 
	I0805 10:42:53.436675    9085 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0805 10:42:53.436715    9085 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0805 10:42:53.436755    9085 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0805 10:42:53.436758    9085 kubeadm.go:310] 
	I0805 10:42:53.436812    9085 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0805 10:42:53.436857    9085 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0805 10:42:53.436862    9085 kubeadm.go:310] 
	I0805 10:42:53.436904    9085 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token qtm75q.q9mybrkyko74z444 \
	I0805 10:42:53.436955    9085 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:11215ef01abcfb912d109f6d89af227ccae4ec1efb0dbe7ad4cd9a56e17c4c25 \
	I0805 10:42:53.436968    9085 kubeadm.go:310] 	--control-plane 
	I0805 10:42:53.436972    9085 kubeadm.go:310] 
	I0805 10:42:53.437031    9085 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0805 10:42:53.437035    9085 kubeadm.go:310] 
	I0805 10:42:53.437074    9085 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token qtm75q.q9mybrkyko74z444 \
	I0805 10:42:53.437130    9085 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:11215ef01abcfb912d109f6d89af227ccae4ec1efb0dbe7ad4cd9a56e17c4c25 
	I0805 10:42:53.437199    9085 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0805 10:42:53.437207    9085 cni.go:84] Creating CNI manager for ""
	I0805 10:42:53.437216    9085 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 10:42:53.444474    9085 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0805 10:42:53.448630    9085 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0805 10:42:53.452119    9085 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
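
The scp above writes the generated bridge CNI config (496 bytes) to /etc/cni/net.d/1-k8s.conflist; the payload itself is not captured in the log. For orientation, a typical bridge conflist has roughly this shape (the subnet and flags below are assumptions, not the file that was written, hence the /tmp target):

	# hedged sketch: shape of a typical bridge CNI conflist
	tee /tmp/1-k8s.conflist >/dev/null <<-'EOF'
	{ "cniVersion": "0.3.1", "name": "bridge", "plugins": [
	    { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true,
	      "ipMasq": true, "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
	    { "type": "portmap", "capabilities": { "portMappings": true } } ] }
	EOF
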
	I0805 10:42:53.456940    9085 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0805 10:42:53.456985    9085 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 10:42:53.457019    9085 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-952000 minikube.k8s.io/updated_at=2024_08_05T10_42_53_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=7ab1b4d76a5d87b75cd4b70be3ee81f93304b0ab minikube.k8s.io/name=running-upgrade-952000 minikube.k8s.io/primary=true
	I0805 10:42:53.500565    9085 kubeadm.go:1113] duration metric: took 43.619125ms to wait for elevateKubeSystemPrivileges
	I0805 10:42:53.500586    9085 ops.go:34] apiserver oom_adj: -16
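
The "apiserver oom_adj: -16" reading above comes from the /proc check issued a few lines earlier: the apiserver runs with a negative OOM adjustment so the kernel's out-of-memory killer prefers to sacrifice other processes first. Re-checking by hand (a sketch; oom_adj is the legacy interface, oom_score_adj its modern equivalent):

	# hedged sketch: read the apiserver's OOM adjustment (expect a negative value)
	cat /proc/$(pgrep -xn kube-apiserver)/oom_adj
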
	I0805 10:42:53.500685    9085 kubeadm.go:394] duration metric: took 4m11.688355375s to StartCluster
	I0805 10:42:53.500696    9085 settings.go:142] acquiring lock: {Name:mk1ff1cf525c2989e8f58a78ff9196d0a088a47b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 10:42:53.500774    9085 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19374-6507/kubeconfig
	I0805 10:42:53.501191    9085 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19374-6507/kubeconfig: {Name:mkf52f0a49b2ae63f3d2905c5633513b3086a0af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 10:42:53.501373    9085 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 10:42:53.501427    9085 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0805 10:42:53.501469    9085 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-952000"
	I0805 10:42:53.501485    9085 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-952000"
	W0805 10:42:53.501489    9085 addons.go:243] addon storage-provisioner should already be in state true
	I0805 10:42:53.501501    9085 host.go:66] Checking if "running-upgrade-952000" exists ...
	I0805 10:42:53.501514    9085 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-952000"
	I0805 10:42:53.501532    9085 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-952000"
	I0805 10:42:53.501575    9085 config.go:182] Loaded profile config "running-upgrade-952000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0805 10:42:53.505586    9085 out.go:177] * Verifying Kubernetes components...
	I0805 10:42:53.512652    9085 kapi.go:59] client config for running-upgrade-952000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/running-upgrade-952000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/running-upgrade-952000/client.key", CAFile:"/Users/jenkins/minikube-integration/19374-6507/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103aa42e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0805 10:42:53.512837    9085 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 10:42:53.512930    9085 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-952000"
	W0805 10:42:53.513034    9085 addons.go:243] addon default-storageclass should already be in state true
	I0805 10:42:53.513049    9085 host.go:66] Checking if "running-upgrade-952000" exists ...
	I0805 10:42:53.514012    9085 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0805 10:42:53.514021    9085 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0805 10:42:53.514032    9085 sshutil.go:53] new ssh client: &{IP:localhost Port:51192 SSHKeyPath:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/running-upgrade-952000/id_rsa Username:docker}
	I0805 10:42:53.516763    9085 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 10:42:53.520834    9085 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 10:42:53.520843    9085 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0805 10:42:53.520851    9085 sshutil.go:53] new ssh client: &{IP:localhost Port:51192 SSHKeyPath:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/running-upgrade-952000/id_rsa Username:docker}
	I0805 10:42:53.592237    9085 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 10:42:53.597575    9085 api_server.go:52] waiting for apiserver process to appear ...
	I0805 10:42:53.597626    9085 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 10:42:53.602187    9085 api_server.go:72] duration metric: took 100.802041ms to wait for apiserver process to appear ...
	I0805 10:42:53.602195    9085 api_server.go:88] waiting for apiserver healthz status ...
	I0805 10:42:53.602201    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:42:53.623285    9085 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0805 10:42:53.630685    9085 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 10:42:58.604217    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:42:58.604244    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:43:03.604364    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:43:03.604399    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:43:08.604674    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:43:08.604702    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:43:13.605051    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:43:13.605074    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:43:18.605475    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:43:18.605518    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:43:23.606144    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:43:23.606213    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0805 10:43:23.979049    9085 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0805 10:43:23.987246    9085 out.go:177] * Enabled addons: storage-provisioner
	I0805 10:43:23.994314    9085 addons.go:510] duration metric: took 30.493296333s for enable addons: enabled=[storage-provisioner]
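
The default-storageclass failure above follows directly from the healthz probes failing throughout this section: enabling that addon requires listing StorageClasses over the API at 10.0.2.15:8443, and the dial times out, whereas the storage-provisioner apply (run inside the guest against the local kubeconfig) evidently completed. Once an apiserver actually responds, the same state can be inspected or repaired with standard kubectl (a sketch, not commands the test ran; "standard" is the usual default class name):

	# hedged sketch: check for a default StorageClass, then mark one as default
	kubectl get storageclass
	kubectl patch storageclass standard -p \
	  '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
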
	I0805 10:43:28.607410    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:43:28.607443    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:43:33.608582    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:43:33.608660    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:43:38.610101    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:43:38.610140    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:43:43.611943    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:43:43.611992    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:43:48.613479    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:43:48.613506    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:43:53.615704    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:43:53.615864    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:43:53.627269    9085 logs.go:276] 1 containers: [a10e618b4b87]
	I0805 10:43:53.627340    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:43:53.637553    9085 logs.go:276] 1 containers: [dd283aa612f7]
	I0805 10:43:53.637626    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:43:53.648356    9085 logs.go:276] 2 containers: [09cf1cd1eb79 3c0b270bfc85]
	I0805 10:43:53.648427    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:43:53.658638    9085 logs.go:276] 1 containers: [4cd2114f032c]
	I0805 10:43:53.658706    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:43:53.669190    9085 logs.go:276] 1 containers: [0b4747b7c71b]
	I0805 10:43:53.669260    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:43:53.679645    9085 logs.go:276] 1 containers: [6b61a4d7e65e]
	I0805 10:43:53.679712    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:43:53.689952    9085 logs.go:276] 0 containers: []
	W0805 10:43:53.689965    9085 logs.go:278] No container was found matching "kindnet"
	I0805 10:43:53.690027    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:43:53.700359    9085 logs.go:276] 1 containers: [5ffcf9115c5b]
	I0805 10:43:53.700377    9085 logs.go:123] Gathering logs for kubelet ...
	I0805 10:43:53.700382    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:43:53.739011    9085 logs.go:123] Gathering logs for storage-provisioner [5ffcf9115c5b] ...
	I0805 10:43:53.739019    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ffcf9115c5b"
	I0805 10:43:53.750193    9085 logs.go:123] Gathering logs for Docker ...
	I0805 10:43:53.750203    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:43:53.775588    9085 logs.go:123] Gathering logs for coredns [09cf1cd1eb79] ...
	I0805 10:43:53.775599    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09cf1cd1eb79"
	I0805 10:43:53.787121    9085 logs.go:123] Gathering logs for coredns [3c0b270bfc85] ...
	I0805 10:43:53.787132    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c0b270bfc85"
	I0805 10:43:53.798838    9085 logs.go:123] Gathering logs for kube-scheduler [4cd2114f032c] ...
	I0805 10:43:53.798848    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4cd2114f032c"
	I0805 10:43:53.813128    9085 logs.go:123] Gathering logs for kube-proxy [0b4747b7c71b] ...
	I0805 10:43:53.813141    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b4747b7c71b"
	I0805 10:43:53.825146    9085 logs.go:123] Gathering logs for dmesg ...
	I0805 10:43:53.825157    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:43:53.830298    9085 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:43:53.830305    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:43:53.865066    9085 logs.go:123] Gathering logs for kube-apiserver [a10e618b4b87] ...
	I0805 10:43:53.865077    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a10e618b4b87"
	I0805 10:43:53.879881    9085 logs.go:123] Gathering logs for etcd [dd283aa612f7] ...
	I0805 10:43:53.879896    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd283aa612f7"
	I0805 10:43:53.894143    9085 logs.go:123] Gathering logs for kube-controller-manager [6b61a4d7e65e] ...
	I0805 10:43:53.894158    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b61a4d7e65e"
	I0805 10:43:53.910646    9085 logs.go:123] Gathering logs for container status ...
	I0805 10:43:53.910656    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:43:56.424104    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:44:01.426340    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:44:01.426514    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:44:01.444910    9085 logs.go:276] 1 containers: [a10e618b4b87]
	I0805 10:44:01.444995    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:44:01.458986    9085 logs.go:276] 1 containers: [dd283aa612f7]
	I0805 10:44:01.459047    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:44:01.471010    9085 logs.go:276] 2 containers: [09cf1cd1eb79 3c0b270bfc85]
	I0805 10:44:01.471069    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:44:01.481054    9085 logs.go:276] 1 containers: [4cd2114f032c]
	I0805 10:44:01.481112    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:44:01.495400    9085 logs.go:276] 1 containers: [0b4747b7c71b]
	I0805 10:44:01.495470    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:44:01.511408    9085 logs.go:276] 1 containers: [6b61a4d7e65e]
	I0805 10:44:01.511474    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:44:01.521722    9085 logs.go:276] 0 containers: []
	W0805 10:44:01.521734    9085 logs.go:278] No container was found matching "kindnet"
	I0805 10:44:01.521788    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:44:01.533062    9085 logs.go:276] 1 containers: [5ffcf9115c5b]
	I0805 10:44:01.533077    9085 logs.go:123] Gathering logs for kube-proxy [0b4747b7c71b] ...
	I0805 10:44:01.533083    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b4747b7c71b"
	I0805 10:44:01.545035    9085 logs.go:123] Gathering logs for storage-provisioner [5ffcf9115c5b] ...
	I0805 10:44:01.545045    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ffcf9115c5b"
	I0805 10:44:01.557203    9085 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:44:01.557215    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:44:01.594182    9085 logs.go:123] Gathering logs for kube-apiserver [a10e618b4b87] ...
	I0805 10:44:01.594193    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a10e618b4b87"
	I0805 10:44:01.609502    9085 logs.go:123] Gathering logs for coredns [09cf1cd1eb79] ...
	I0805 10:44:01.609512    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09cf1cd1eb79"
	I0805 10:44:01.620855    9085 logs.go:123] Gathering logs for kube-scheduler [4cd2114f032c] ...
	I0805 10:44:01.620865    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4cd2114f032c"
	I0805 10:44:01.636904    9085 logs.go:123] Gathering logs for kube-controller-manager [6b61a4d7e65e] ...
	I0805 10:44:01.636918    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b61a4d7e65e"
	I0805 10:44:01.654329    9085 logs.go:123] Gathering logs for Docker ...
	I0805 10:44:01.654339    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:44:01.679225    9085 logs.go:123] Gathering logs for container status ...
	I0805 10:44:01.679233    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:44:01.691789    9085 logs.go:123] Gathering logs for kubelet ...
	I0805 10:44:01.691801    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:44:01.731122    9085 logs.go:123] Gathering logs for dmesg ...
	I0805 10:44:01.731138    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:44:01.735570    9085 logs.go:123] Gathering logs for etcd [dd283aa612f7] ...
	I0805 10:44:01.735576    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd283aa612f7"
	I0805 10:44:01.753517    9085 logs.go:123] Gathering logs for coredns [3c0b270bfc85] ...
	I0805 10:44:01.753529    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c0b270bfc85"
	I0805 10:44:04.270887    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:44:09.273327    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:44:09.273578    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:44:09.295348    9085 logs.go:276] 1 containers: [a10e618b4b87]
	I0805 10:44:09.295440    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:44:09.311868    9085 logs.go:276] 1 containers: [dd283aa612f7]
	I0805 10:44:09.311947    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:44:09.323527    9085 logs.go:276] 2 containers: [09cf1cd1eb79 3c0b270bfc85]
	I0805 10:44:09.323587    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:44:09.333883    9085 logs.go:276] 1 containers: [4cd2114f032c]
	I0805 10:44:09.333949    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:44:09.344030    9085 logs.go:276] 1 containers: [0b4747b7c71b]
	I0805 10:44:09.344106    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:44:09.359175    9085 logs.go:276] 1 containers: [6b61a4d7e65e]
	I0805 10:44:09.359238    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:44:09.368715    9085 logs.go:276] 0 containers: []
	W0805 10:44:09.368728    9085 logs.go:278] No container was found matching "kindnet"
	I0805 10:44:09.368786    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:44:09.379254    9085 logs.go:276] 1 containers: [5ffcf9115c5b]
	I0805 10:44:09.379268    9085 logs.go:123] Gathering logs for container status ...
	I0805 10:44:09.379275    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:44:09.390504    9085 logs.go:123] Gathering logs for kubelet ...
	I0805 10:44:09.390519    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:44:09.427251    9085 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:44:09.427262    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:44:09.464545    9085 logs.go:123] Gathering logs for kube-apiserver [a10e618b4b87] ...
	I0805 10:44:09.464556    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a10e618b4b87"
	I0805 10:44:09.479832    9085 logs.go:123] Gathering logs for etcd [dd283aa612f7] ...
	I0805 10:44:09.479843    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd283aa612f7"
	I0805 10:44:09.495441    9085 logs.go:123] Gathering logs for coredns [3c0b270bfc85] ...
	I0805 10:44:09.495450    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c0b270bfc85"
	I0805 10:44:09.507329    9085 logs.go:123] Gathering logs for storage-provisioner [5ffcf9115c5b] ...
	I0805 10:44:09.507339    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ffcf9115c5b"
	I0805 10:44:09.521721    9085 logs.go:123] Gathering logs for Docker ...
	I0805 10:44:09.521732    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:44:09.546555    9085 logs.go:123] Gathering logs for dmesg ...
	I0805 10:44:09.546566    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:44:09.551044    9085 logs.go:123] Gathering logs for coredns [09cf1cd1eb79] ...
	I0805 10:44:09.551051    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09cf1cd1eb79"
	I0805 10:44:09.563200    9085 logs.go:123] Gathering logs for kube-scheduler [4cd2114f032c] ...
	I0805 10:44:09.563210    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4cd2114f032c"
	I0805 10:44:09.577955    9085 logs.go:123] Gathering logs for kube-proxy [0b4747b7c71b] ...
	I0805 10:44:09.577966    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b4747b7c71b"
	I0805 10:44:09.589469    9085 logs.go:123] Gathering logs for kube-controller-manager [6b61a4d7e65e] ...
	I0805 10:44:09.589480    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b61a4d7e65e"
	I0805 10:44:12.107208    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:44:17.108570    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:44:17.108758    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:44:17.131117    9085 logs.go:276] 1 containers: [a10e618b4b87]
	I0805 10:44:17.131221    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:44:17.147343    9085 logs.go:276] 1 containers: [dd283aa612f7]
	I0805 10:44:17.147418    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:44:17.161036    9085 logs.go:276] 2 containers: [09cf1cd1eb79 3c0b270bfc85]
	I0805 10:44:17.161118    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:44:17.172374    9085 logs.go:276] 1 containers: [4cd2114f032c]
	I0805 10:44:17.172440    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:44:17.182751    9085 logs.go:276] 1 containers: [0b4747b7c71b]
	I0805 10:44:17.182818    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:44:17.193249    9085 logs.go:276] 1 containers: [6b61a4d7e65e]
	I0805 10:44:17.193320    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:44:17.203504    9085 logs.go:276] 0 containers: []
	W0805 10:44:17.203516    9085 logs.go:278] No container was found matching "kindnet"
	I0805 10:44:17.203573    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:44:17.214211    9085 logs.go:276] 1 containers: [5ffcf9115c5b]
	I0805 10:44:17.214230    9085 logs.go:123] Gathering logs for kubelet ...
	I0805 10:44:17.214236    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:44:17.250601    9085 logs.go:123] Gathering logs for coredns [3c0b270bfc85] ...
	I0805 10:44:17.250610    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c0b270bfc85"
	I0805 10:44:17.262096    9085 logs.go:123] Gathering logs for kube-scheduler [4cd2114f032c] ...
	I0805 10:44:17.262108    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4cd2114f032c"
	I0805 10:44:17.276379    9085 logs.go:123] Gathering logs for storage-provisioner [5ffcf9115c5b] ...
	I0805 10:44:17.276391    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ffcf9115c5b"
	I0805 10:44:17.288236    9085 logs.go:123] Gathering logs for container status ...
	I0805 10:44:17.288248    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:44:17.299887    9085 logs.go:123] Gathering logs for kube-proxy [0b4747b7c71b] ...
	I0805 10:44:17.299898    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b4747b7c71b"
	I0805 10:44:17.319551    9085 logs.go:123] Gathering logs for kube-controller-manager [6b61a4d7e65e] ...
	I0805 10:44:17.319562    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b61a4d7e65e"
	I0805 10:44:17.337099    9085 logs.go:123] Gathering logs for Docker ...
	I0805 10:44:17.337109    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:44:17.360526    9085 logs.go:123] Gathering logs for dmesg ...
	I0805 10:44:17.360535    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:44:17.365130    9085 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:44:17.365137    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:44:17.401043    9085 logs.go:123] Gathering logs for kube-apiserver [a10e618b4b87] ...
	I0805 10:44:17.401060    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a10e618b4b87"
	I0805 10:44:17.415584    9085 logs.go:123] Gathering logs for etcd [dd283aa612f7] ...
	I0805 10:44:17.415599    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd283aa612f7"
	I0805 10:44:17.429317    9085 logs.go:123] Gathering logs for coredns [09cf1cd1eb79] ...
	I0805 10:44:17.429328    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09cf1cd1eb79"
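Note: each retry above follows the same five-second pattern: api_server.go logs "Checking apiserver healthz" and, five seconds later, "stopped: ... (Client.Timeout exceeded while awaiting headers)". A minimal Go sketch of that polling behavior, assuming a 5-second client timeout and an untrusted in-VM certificate (an illustration only, not minikube's actual api_server.go):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// checkHealthz issues one GET against the apiserver health endpoint.
// A hung apiserver surfaces exactly as in the log: the client gives up
// after its timeout with "context deadline exceeded (Client.Timeout
// exceeded while awaiting headers)".
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // assumption: matches the ~5 s gap between the check and "stopped" lines
		Transport: &http.Transport{
			// assumption: the apiserver cert inside the VM is not in the client's trust store
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %s", resp.Status)
	}
	return nil
}

func main() {
	for {
		err := checkHealthz("https://10.0.2.15:8443/healthz")
		if err == nil {
			fmt.Println("apiserver healthy")
			return
		}
		fmt.Println("stopped:", err)
		time.Sleep(2 * time.Second) // assumption: roughly the pause seen between retries in the log
	}
}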
	I0805 10:44:19.942751    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:44:24.945037    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:44:24.945161    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:44:24.957929    9085 logs.go:276] 1 containers: [a10e618b4b87]
	I0805 10:44:24.958013    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:44:24.968637    9085 logs.go:276] 1 containers: [dd283aa612f7]
	I0805 10:44:24.968707    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:44:24.979269    9085 logs.go:276] 2 containers: [09cf1cd1eb79 3c0b270bfc85]
	I0805 10:44:24.979331    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:44:24.989994    9085 logs.go:276] 1 containers: [4cd2114f032c]
	I0805 10:44:24.990061    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:44:25.000606    9085 logs.go:276] 1 containers: [0b4747b7c71b]
	I0805 10:44:25.000675    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:44:25.010992    9085 logs.go:276] 1 containers: [6b61a4d7e65e]
	I0805 10:44:25.011064    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:44:25.022028    9085 logs.go:276] 0 containers: []
	W0805 10:44:25.022038    9085 logs.go:278] No container was found matching "kindnet"
	I0805 10:44:25.022095    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:44:25.032639    9085 logs.go:276] 1 containers: [5ffcf9115c5b]
	I0805 10:44:25.032658    9085 logs.go:123] Gathering logs for dmesg ...
	I0805 10:44:25.032664    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:44:25.037696    9085 logs.go:123] Gathering logs for etcd [dd283aa612f7] ...
	I0805 10:44:25.037703    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd283aa612f7"
	I0805 10:44:25.051805    9085 logs.go:123] Gathering logs for coredns [3c0b270bfc85] ...
	I0805 10:44:25.051818    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c0b270bfc85"
	I0805 10:44:25.062930    9085 logs.go:123] Gathering logs for kube-scheduler [4cd2114f032c] ...
	I0805 10:44:25.062944    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4cd2114f032c"
	I0805 10:44:25.077937    9085 logs.go:123] Gathering logs for storage-provisioner [5ffcf9115c5b] ...
	I0805 10:44:25.077948    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ffcf9115c5b"
	I0805 10:44:25.090176    9085 logs.go:123] Gathering logs for Docker ...
	I0805 10:44:25.090187    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:44:25.113419    9085 logs.go:123] Gathering logs for kubelet ...
	I0805 10:44:25.113429    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:44:25.150278    9085 logs.go:123] Gathering logs for kube-apiserver [a10e618b4b87] ...
	I0805 10:44:25.150285    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a10e618b4b87"
	I0805 10:44:25.164113    9085 logs.go:123] Gathering logs for coredns [09cf1cd1eb79] ...
	I0805 10:44:25.164124    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09cf1cd1eb79"
	I0805 10:44:25.175982    9085 logs.go:123] Gathering logs for kube-proxy [0b4747b7c71b] ...
	I0805 10:44:25.175993    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b4747b7c71b"
	I0805 10:44:25.187085    9085 logs.go:123] Gathering logs for kube-controller-manager [6b61a4d7e65e] ...
	I0805 10:44:25.187096    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b61a4d7e65e"
	I0805 10:44:25.208523    9085 logs.go:123] Gathering logs for container status ...
	I0805 10:44:25.208534    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:44:25.220452    9085 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:44:25.220465    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
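Note: before each log-gathering pass, the runner issues one "docker ps -a" query per control-plane component, filtering on the "k8s_<name>" container-name prefix and printing only IDs via the Go template {{.ID}}; zero matches (as with kindnet here) is logged as a warning, not an error. A standalone sketch of that discovery step (illustrative only; the real commands run through minikube's ssh_runner inside the VM):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists all containers (running or exited) whose name matches
// the kubelet's k8s_<component> naming convention and returns their IDs.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
	}
	for _, c := range components {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		// An empty result is only worth a warning, as with "kindnet" in the log above.
		fmt.Printf("%d containers for %s: %v\n", len(ids), c, ids)
	}
}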
	I0805 10:44:27.760097    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:44:32.762232    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:44:32.762461    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:44:32.784011    9085 logs.go:276] 1 containers: [a10e618b4b87]
	I0805 10:44:32.784124    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:44:32.799563    9085 logs.go:276] 1 containers: [dd283aa612f7]
	I0805 10:44:32.799638    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:44:32.812424    9085 logs.go:276] 2 containers: [09cf1cd1eb79 3c0b270bfc85]
	I0805 10:44:32.812500    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:44:32.822652    9085 logs.go:276] 1 containers: [4cd2114f032c]
	I0805 10:44:32.822730    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:44:32.833273    9085 logs.go:276] 1 containers: [0b4747b7c71b]
	I0805 10:44:32.833340    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:44:32.843788    9085 logs.go:276] 1 containers: [6b61a4d7e65e]
	I0805 10:44:32.843858    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:44:32.853318    9085 logs.go:276] 0 containers: []
	W0805 10:44:32.853328    9085 logs.go:278] No container was found matching "kindnet"
	I0805 10:44:32.853379    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:44:32.863624    9085 logs.go:276] 1 containers: [5ffcf9115c5b]
	I0805 10:44:32.863639    9085 logs.go:123] Gathering logs for etcd [dd283aa612f7] ...
	I0805 10:44:32.863645    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd283aa612f7"
	I0805 10:44:32.878988    9085 logs.go:123] Gathering logs for kube-controller-manager [6b61a4d7e65e] ...
	I0805 10:44:32.879000    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b61a4d7e65e"
	I0805 10:44:32.896280    9085 logs.go:123] Gathering logs for Docker ...
	I0805 10:44:32.896291    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:44:32.921728    9085 logs.go:123] Gathering logs for kubelet ...
	I0805 10:44:32.921738    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:44:32.960587    9085 logs.go:123] Gathering logs for dmesg ...
	I0805 10:44:32.960602    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:44:32.965310    9085 logs.go:123] Gathering logs for coredns [09cf1cd1eb79] ...
	I0805 10:44:32.965318    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09cf1cd1eb79"
	I0805 10:44:32.981325    9085 logs.go:123] Gathering logs for coredns [3c0b270bfc85] ...
	I0805 10:44:32.981337    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c0b270bfc85"
	I0805 10:44:32.993396    9085 logs.go:123] Gathering logs for kube-scheduler [4cd2114f032c] ...
	I0805 10:44:32.993406    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4cd2114f032c"
	I0805 10:44:33.007880    9085 logs.go:123] Gathering logs for kube-proxy [0b4747b7c71b] ...
	I0805 10:44:33.007890    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b4747b7c71b"
	I0805 10:44:33.019470    9085 logs.go:123] Gathering logs for storage-provisioner [5ffcf9115c5b] ...
	I0805 10:44:33.019484    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ffcf9115c5b"
	I0805 10:44:33.030997    9085 logs.go:123] Gathering logs for container status ...
	I0805 10:44:33.031012    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:44:33.042600    9085 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:44:33.042610    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:44:33.086117    9085 logs.go:123] Gathering logs for kube-apiserver [a10e618b4b87] ...
	I0805 10:44:33.086131    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a10e618b4b87"
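Note: the "container status" command in these cycles is a shell fallback: sudo `which crictl || echo crictl` ps -a prefers crictl when it is installed, and the trailing || sudo docker ps -a takes over when crictl is missing or fails. A rough Go equivalent (simplified in one respect: it skips the doomed "sudo crictl" attempt when crictl is absent, whereas the shell version still makes that attempt before falling back):

package main

import (
	"fmt"
	"os/exec"
)

// containerStatus mirrors the fallback in the log: try crictl first,
// then fall back to plain docker ps -a.
func containerStatus() (string, error) {
	if path, err := exec.LookPath("crictl"); err == nil {
		if out, err := exec.Command("sudo", path, "ps", "-a").Output(); err == nil {
			return string(out), nil
		}
	}
	out, err := exec.Command("sudo", "docker", "ps", "-a").Output()
	return string(out), err
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("both crictl and docker failed:", err)
		return
	}
	fmt.Print(out)
}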
	I0805 10:44:35.602731    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:44:40.603126    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:44:40.603556    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:44:40.647143    9085 logs.go:276] 1 containers: [a10e618b4b87]
	I0805 10:44:40.647276    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:44:40.667935    9085 logs.go:276] 1 containers: [dd283aa612f7]
	I0805 10:44:40.668050    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:44:40.683077    9085 logs.go:276] 2 containers: [09cf1cd1eb79 3c0b270bfc85]
	I0805 10:44:40.683157    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:44:40.695188    9085 logs.go:276] 1 containers: [4cd2114f032c]
	I0805 10:44:40.695260    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:44:40.706225    9085 logs.go:276] 1 containers: [0b4747b7c71b]
	I0805 10:44:40.706294    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:44:40.717064    9085 logs.go:276] 1 containers: [6b61a4d7e65e]
	I0805 10:44:40.717132    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:44:40.727305    9085 logs.go:276] 0 containers: []
	W0805 10:44:40.727316    9085 logs.go:278] No container was found matching "kindnet"
	I0805 10:44:40.727375    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:44:40.738220    9085 logs.go:276] 1 containers: [5ffcf9115c5b]
	I0805 10:44:40.738235    9085 logs.go:123] Gathering logs for container status ...
	I0805 10:44:40.738242    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:44:40.749960    9085 logs.go:123] Gathering logs for kubelet ...
	I0805 10:44:40.749971    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:44:40.788623    9085 logs.go:123] Gathering logs for dmesg ...
	I0805 10:44:40.788632    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:44:40.793629    9085 logs.go:123] Gathering logs for coredns [09cf1cd1eb79] ...
	I0805 10:44:40.793636    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09cf1cd1eb79"
	I0805 10:44:40.805658    9085 logs.go:123] Gathering logs for coredns [3c0b270bfc85] ...
	I0805 10:44:40.805671    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c0b270bfc85"
	I0805 10:44:40.817173    9085 logs.go:123] Gathering logs for kube-proxy [0b4747b7c71b] ...
	I0805 10:44:40.817184    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b4747b7c71b"
	I0805 10:44:40.829647    9085 logs.go:123] Gathering logs for kube-controller-manager [6b61a4d7e65e] ...
	I0805 10:44:40.829659    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b61a4d7e65e"
	I0805 10:44:40.850035    9085 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:44:40.850045    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:44:40.884947    9085 logs.go:123] Gathering logs for kube-apiserver [a10e618b4b87] ...
	I0805 10:44:40.884958    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a10e618b4b87"
	I0805 10:44:40.900101    9085 logs.go:123] Gathering logs for etcd [dd283aa612f7] ...
	I0805 10:44:40.900111    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd283aa612f7"
	I0805 10:44:40.913975    9085 logs.go:123] Gathering logs for kube-scheduler [4cd2114f032c] ...
	I0805 10:44:40.913986    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4cd2114f032c"
	I0805 10:44:40.928751    9085 logs.go:123] Gathering logs for storage-provisioner [5ffcf9115c5b] ...
	I0805 10:44:40.928761    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ffcf9115c5b"
	I0805 10:44:40.940595    9085 logs.go:123] Gathering logs for Docker ...
	I0805 10:44:40.940604    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:44:43.465934    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:44:48.468285    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:44:48.468457    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:44:48.491092    9085 logs.go:276] 1 containers: [a10e618b4b87]
	I0805 10:44:48.491175    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:44:48.506912    9085 logs.go:276] 1 containers: [dd283aa612f7]
	I0805 10:44:48.506987    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:44:48.526535    9085 logs.go:276] 2 containers: [09cf1cd1eb79 3c0b270bfc85]
	I0805 10:44:48.526597    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:44:48.537314    9085 logs.go:276] 1 containers: [4cd2114f032c]
	I0805 10:44:48.537375    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:44:48.547738    9085 logs.go:276] 1 containers: [0b4747b7c71b]
	I0805 10:44:48.547801    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:44:48.558084    9085 logs.go:276] 1 containers: [6b61a4d7e65e]
	I0805 10:44:48.558142    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:44:48.568143    9085 logs.go:276] 0 containers: []
	W0805 10:44:48.568157    9085 logs.go:278] No container was found matching "kindnet"
	I0805 10:44:48.568210    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:44:48.578791    9085 logs.go:276] 1 containers: [5ffcf9115c5b]
	I0805 10:44:48.578806    9085 logs.go:123] Gathering logs for kube-scheduler [4cd2114f032c] ...
	I0805 10:44:48.578812    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4cd2114f032c"
	I0805 10:44:48.593919    9085 logs.go:123] Gathering logs for kube-proxy [0b4747b7c71b] ...
	I0805 10:44:48.593929    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b4747b7c71b"
	I0805 10:44:48.605780    9085 logs.go:123] Gathering logs for Docker ...
	I0805 10:44:48.605790    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:44:48.629952    9085 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:44:48.629961    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:44:48.664732    9085 logs.go:123] Gathering logs for coredns [3c0b270bfc85] ...
	I0805 10:44:48.664743    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c0b270bfc85"
	I0805 10:44:48.682952    9085 logs.go:123] Gathering logs for kube-apiserver [a10e618b4b87] ...
	I0805 10:44:48.682963    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a10e618b4b87"
	I0805 10:44:48.697038    9085 logs.go:123] Gathering logs for etcd [dd283aa612f7] ...
	I0805 10:44:48.697048    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd283aa612f7"
	I0805 10:44:48.711625    9085 logs.go:123] Gathering logs for coredns [09cf1cd1eb79] ...
	I0805 10:44:48.711636    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09cf1cd1eb79"
	I0805 10:44:48.723709    9085 logs.go:123] Gathering logs for kube-controller-manager [6b61a4d7e65e] ...
	I0805 10:44:48.723718    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b61a4d7e65e"
	I0805 10:44:48.740664    9085 logs.go:123] Gathering logs for storage-provisioner [5ffcf9115c5b] ...
	I0805 10:44:48.740675    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ffcf9115c5b"
	I0805 10:44:48.752078    9085 logs.go:123] Gathering logs for container status ...
	I0805 10:44:48.752089    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:44:48.763189    9085 logs.go:123] Gathering logs for kubelet ...
	I0805 10:44:48.763199    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:44:48.800104    9085 logs.go:123] Gathering logs for dmesg ...
	I0805 10:44:48.800114    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:44:51.307048    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:44:56.309209    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:44:56.309462    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:44:56.327106    9085 logs.go:276] 1 containers: [a10e618b4b87]
	I0805 10:44:56.327191    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:44:56.340905    9085 logs.go:276] 1 containers: [dd283aa612f7]
	I0805 10:44:56.340982    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:44:56.353277    9085 logs.go:276] 3 containers: [911b32609175 09cf1cd1eb79 3c0b270bfc85]
	I0805 10:44:56.353344    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:44:56.365926    9085 logs.go:276] 1 containers: [4cd2114f032c]
	I0805 10:44:56.366001    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:44:56.380019    9085 logs.go:276] 1 containers: [0b4747b7c71b]
	I0805 10:44:56.380088    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:44:56.390237    9085 logs.go:276] 1 containers: [6b61a4d7e65e]
	I0805 10:44:56.390301    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:44:56.400169    9085 logs.go:276] 0 containers: []
	W0805 10:44:56.400180    9085 logs.go:278] No container was found matching "kindnet"
	I0805 10:44:56.400238    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:44:56.418675    9085 logs.go:276] 1 containers: [5ffcf9115c5b]
	I0805 10:44:56.418693    9085 logs.go:123] Gathering logs for coredns [3c0b270bfc85] ...
	I0805 10:44:56.418698    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c0b270bfc85"
	I0805 10:44:56.430743    9085 logs.go:123] Gathering logs for Docker ...
	I0805 10:44:56.430754    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:44:56.456054    9085 logs.go:123] Gathering logs for dmesg ...
	I0805 10:44:56.456061    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:44:56.460752    9085 logs.go:123] Gathering logs for coredns [911b32609175] ...
	I0805 10:44:56.460761    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 911b32609175"
	I0805 10:44:56.471606    9085 logs.go:123] Gathering logs for container status ...
	I0805 10:44:56.471619    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:44:56.483355    9085 logs.go:123] Gathering logs for etcd [dd283aa612f7] ...
	I0805 10:44:56.483367    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd283aa612f7"
	I0805 10:44:56.497260    9085 logs.go:123] Gathering logs for kube-scheduler [4cd2114f032c] ...
	I0805 10:44:56.497271    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4cd2114f032c"
	I0805 10:44:56.511760    9085 logs.go:123] Gathering logs for kube-proxy [0b4747b7c71b] ...
	I0805 10:44:56.511772    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b4747b7c71b"
	I0805 10:44:56.524486    9085 logs.go:123] Gathering logs for kube-controller-manager [6b61a4d7e65e] ...
	I0805 10:44:56.524498    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b61a4d7e65e"
	I0805 10:44:56.543907    9085 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:44:56.543919    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:44:56.588380    9085 logs.go:123] Gathering logs for kube-apiserver [a10e618b4b87] ...
	I0805 10:44:56.588394    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a10e618b4b87"
	I0805 10:44:56.603819    9085 logs.go:123] Gathering logs for storage-provisioner [5ffcf9115c5b] ...
	I0805 10:44:56.603829    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ffcf9115c5b"
	I0805 10:44:56.615253    9085 logs.go:123] Gathering logs for kubelet ...
	I0805 10:44:56.615262    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:44:56.652078    9085 logs.go:123] Gathering logs for coredns [09cf1cd1eb79] ...
	I0805 10:44:56.652086    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09cf1cd1eb79"
	I0805 10:44:59.165875    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:45:04.166868    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:45:04.167138    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:45:04.199083    9085 logs.go:276] 1 containers: [a10e618b4b87]
	I0805 10:45:04.199213    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:45:04.218431    9085 logs.go:276] 1 containers: [dd283aa612f7]
	I0805 10:45:04.218527    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:45:04.233267    9085 logs.go:276] 3 containers: [911b32609175 09cf1cd1eb79 3c0b270bfc85]
	I0805 10:45:04.233348    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:45:04.245327    9085 logs.go:276] 1 containers: [4cd2114f032c]
	I0805 10:45:04.245399    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:45:04.259714    9085 logs.go:276] 1 containers: [0b4747b7c71b]
	I0805 10:45:04.259774    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:45:04.271854    9085 logs.go:276] 1 containers: [6b61a4d7e65e]
	I0805 10:45:04.271915    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:45:04.285797    9085 logs.go:276] 0 containers: []
	W0805 10:45:04.285811    9085 logs.go:278] No container was found matching "kindnet"
	I0805 10:45:04.285873    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:45:04.296291    9085 logs.go:276] 1 containers: [5ffcf9115c5b]
	I0805 10:45:04.296309    9085 logs.go:123] Gathering logs for coredns [911b32609175] ...
	I0805 10:45:04.296314    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 911b32609175"
	I0805 10:45:04.311175    9085 logs.go:123] Gathering logs for coredns [09cf1cd1eb79] ...
	I0805 10:45:04.311187    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09cf1cd1eb79"
	I0805 10:45:04.322636    9085 logs.go:123] Gathering logs for kube-scheduler [4cd2114f032c] ...
	I0805 10:45:04.322646    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4cd2114f032c"
	I0805 10:45:04.338266    9085 logs.go:123] Gathering logs for storage-provisioner [5ffcf9115c5b] ...
	I0805 10:45:04.338277    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ffcf9115c5b"
	I0805 10:45:04.354768    9085 logs.go:123] Gathering logs for container status ...
	I0805 10:45:04.354779    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:45:04.366189    9085 logs.go:123] Gathering logs for kube-apiserver [a10e618b4b87] ...
	I0805 10:45:04.366202    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a10e618b4b87"
	I0805 10:45:04.381198    9085 logs.go:123] Gathering logs for kube-proxy [0b4747b7c71b] ...
	I0805 10:45:04.381209    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b4747b7c71b"
	I0805 10:45:04.392650    9085 logs.go:123] Gathering logs for Docker ...
	I0805 10:45:04.392660    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:45:04.416913    9085 logs.go:123] Gathering logs for etcd [dd283aa612f7] ...
	I0805 10:45:04.416922    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd283aa612f7"
	I0805 10:45:04.431130    9085 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:45:04.431140    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:45:04.468381    9085 logs.go:123] Gathering logs for coredns [3c0b270bfc85] ...
	I0805 10:45:04.468396    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c0b270bfc85"
	I0805 10:45:04.481149    9085 logs.go:123] Gathering logs for kube-controller-manager [6b61a4d7e65e] ...
	I0805 10:45:04.481160    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b61a4d7e65e"
	I0805 10:45:04.498984    9085 logs.go:123] Gathering logs for kubelet ...
	I0805 10:45:04.498994    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:45:04.537752    9085 logs.go:123] Gathering logs for dmesg ...
	I0805 10:45:04.537761    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:45:07.044185    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:45:12.046362    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:45:12.046546    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:45:12.081974    9085 logs.go:276] 1 containers: [a10e618b4b87]
	I0805 10:45:12.082060    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:45:12.094554    9085 logs.go:276] 1 containers: [dd283aa612f7]
	I0805 10:45:12.094631    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:45:12.105966    9085 logs.go:276] 4 containers: [d08309a6b024 911b32609175 09cf1cd1eb79 3c0b270bfc85]
	I0805 10:45:12.106043    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:45:12.116811    9085 logs.go:276] 1 containers: [4cd2114f032c]
	I0805 10:45:12.116885    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:45:12.127821    9085 logs.go:276] 1 containers: [0b4747b7c71b]
	I0805 10:45:12.127889    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:45:12.138493    9085 logs.go:276] 1 containers: [6b61a4d7e65e]
	I0805 10:45:12.138564    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:45:12.150426    9085 logs.go:276] 0 containers: []
	W0805 10:45:12.150437    9085 logs.go:278] No container was found matching "kindnet"
	I0805 10:45:12.150497    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:45:12.160826    9085 logs.go:276] 1 containers: [5ffcf9115c5b]
	I0805 10:45:12.160844    9085 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:45:12.160849    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:45:12.195946    9085 logs.go:123] Gathering logs for etcd [dd283aa612f7] ...
	I0805 10:45:12.195958    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd283aa612f7"
	I0805 10:45:12.215764    9085 logs.go:123] Gathering logs for Docker ...
	I0805 10:45:12.215776    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:45:12.240928    9085 logs.go:123] Gathering logs for dmesg ...
	I0805 10:45:12.240938    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:45:12.245659    9085 logs.go:123] Gathering logs for coredns [3c0b270bfc85] ...
	I0805 10:45:12.245669    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c0b270bfc85"
	I0805 10:45:12.257476    9085 logs.go:123] Gathering logs for storage-provisioner [5ffcf9115c5b] ...
	I0805 10:45:12.257486    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ffcf9115c5b"
	I0805 10:45:12.268925    9085 logs.go:123] Gathering logs for coredns [911b32609175] ...
	I0805 10:45:12.268936    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 911b32609175"
	I0805 10:45:12.280526    9085 logs.go:123] Gathering logs for kube-scheduler [4cd2114f032c] ...
	I0805 10:45:12.280538    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4cd2114f032c"
	I0805 10:45:12.295104    9085 logs.go:123] Gathering logs for kube-proxy [0b4747b7c71b] ...
	I0805 10:45:12.295114    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b4747b7c71b"
	I0805 10:45:12.306630    9085 logs.go:123] Gathering logs for kube-controller-manager [6b61a4d7e65e] ...
	I0805 10:45:12.306642    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b61a4d7e65e"
	I0805 10:45:12.324629    9085 logs.go:123] Gathering logs for kubelet ...
	I0805 10:45:12.324640    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:45:12.360989    9085 logs.go:123] Gathering logs for kube-apiserver [a10e618b4b87] ...
	I0805 10:45:12.360999    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a10e618b4b87"
	I0805 10:45:12.375393    9085 logs.go:123] Gathering logs for coredns [d08309a6b024] ...
	I0805 10:45:12.375404    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08309a6b024"
	I0805 10:45:12.386736    9085 logs.go:123] Gathering logs for coredns [09cf1cd1eb79] ...
	I0805 10:45:12.386748    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09cf1cd1eb79"
	I0805 10:45:12.398147    9085 logs.go:123] Gathering logs for container status ...
	I0805 10:45:12.398157    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:45:14.911446    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:45:19.913682    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:45:19.913913    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:45:19.931533    9085 logs.go:276] 1 containers: [a10e618b4b87]
	I0805 10:45:19.931625    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:45:19.945178    9085 logs.go:276] 1 containers: [dd283aa612f7]
	I0805 10:45:19.945259    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:45:19.958584    9085 logs.go:276] 4 containers: [d08309a6b024 911b32609175 09cf1cd1eb79 3c0b270bfc85]
	I0805 10:45:19.958662    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:45:19.969131    9085 logs.go:276] 1 containers: [4cd2114f032c]
	I0805 10:45:19.969192    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:45:19.979689    9085 logs.go:276] 1 containers: [0b4747b7c71b]
	I0805 10:45:19.979756    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:45:19.991427    9085 logs.go:276] 1 containers: [6b61a4d7e65e]
	I0805 10:45:19.991490    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:45:20.001764    9085 logs.go:276] 0 containers: []
	W0805 10:45:20.001776    9085 logs.go:278] No container was found matching "kindnet"
	I0805 10:45:20.001829    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:45:20.012482    9085 logs.go:276] 1 containers: [5ffcf9115c5b]
	I0805 10:45:20.012499    9085 logs.go:123] Gathering logs for dmesg ...
	I0805 10:45:20.012504    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:45:20.017409    9085 logs.go:123] Gathering logs for etcd [dd283aa612f7] ...
	I0805 10:45:20.017418    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd283aa612f7"
	I0805 10:45:20.049534    9085 logs.go:123] Gathering logs for kube-proxy [0b4747b7c71b] ...
	I0805 10:45:20.049547    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b4747b7c71b"
	I0805 10:45:20.061159    9085 logs.go:123] Gathering logs for kube-controller-manager [6b61a4d7e65e] ...
	I0805 10:45:20.061173    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b61a4d7e65e"
	I0805 10:45:20.079061    9085 logs.go:123] Gathering logs for storage-provisioner [5ffcf9115c5b] ...
	I0805 10:45:20.079073    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ffcf9115c5b"
	I0805 10:45:20.090379    9085 logs.go:123] Gathering logs for Docker ...
	I0805 10:45:20.090393    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:45:20.113901    9085 logs.go:123] Gathering logs for kubelet ...
	I0805 10:45:20.113911    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:45:20.152414    9085 logs.go:123] Gathering logs for coredns [d08309a6b024] ...
	I0805 10:45:20.152426    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08309a6b024"
	I0805 10:45:20.163696    9085 logs.go:123] Gathering logs for coredns [911b32609175] ...
	I0805 10:45:20.163706    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 911b32609175"
	I0805 10:45:20.175017    9085 logs.go:123] Gathering logs for coredns [3c0b270bfc85] ...
	I0805 10:45:20.175034    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c0b270bfc85"
	I0805 10:45:20.186859    9085 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:45:20.186869    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:45:20.222490    9085 logs.go:123] Gathering logs for container status ...
	I0805 10:45:20.222509    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:45:20.234696    9085 logs.go:123] Gathering logs for kube-apiserver [a10e618b4b87] ...
	I0805 10:45:20.234707    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a10e618b4b87"
	I0805 10:45:20.261001    9085 logs.go:123] Gathering logs for coredns [09cf1cd1eb79] ...
	I0805 10:45:20.261013    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09cf1cd1eb79"
	I0805 10:45:20.272514    9085 logs.go:123] Gathering logs for kube-scheduler [4cd2114f032c] ...
	I0805 10:45:20.272525    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4cd2114f032c"
	I0805 10:45:22.792036    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:45:27.794348    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:45:27.794559    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:45:27.814601    9085 logs.go:276] 1 containers: [a10e618b4b87]
	I0805 10:45:27.814691    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:45:27.831367    9085 logs.go:276] 1 containers: [dd283aa612f7]
	I0805 10:45:27.831443    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:45:27.843018    9085 logs.go:276] 4 containers: [d08309a6b024 911b32609175 09cf1cd1eb79 3c0b270bfc85]
	I0805 10:45:27.843089    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:45:27.852958    9085 logs.go:276] 1 containers: [4cd2114f032c]
	I0805 10:45:27.853025    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:45:27.862990    9085 logs.go:276] 1 containers: [0b4747b7c71b]
	I0805 10:45:27.863062    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:45:27.873490    9085 logs.go:276] 1 containers: [6b61a4d7e65e]
	I0805 10:45:27.873550    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:45:27.883998    9085 logs.go:276] 0 containers: []
	W0805 10:45:27.884010    9085 logs.go:278] No container was found matching "kindnet"
	I0805 10:45:27.884066    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:45:27.894568    9085 logs.go:276] 1 containers: [5ffcf9115c5b]
	I0805 10:45:27.894584    9085 logs.go:123] Gathering logs for kubelet ...
	I0805 10:45:27.894591    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:45:27.931257    9085 logs.go:123] Gathering logs for kube-scheduler [4cd2114f032c] ...
	I0805 10:45:27.931267    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4cd2114f032c"
	I0805 10:45:27.951830    9085 logs.go:123] Gathering logs for container status ...
	I0805 10:45:27.951839    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:45:27.963972    9085 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:45:27.963984    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:45:28.001109    9085 logs.go:123] Gathering logs for coredns [911b32609175] ...
	I0805 10:45:28.001123    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 911b32609175"
	I0805 10:45:28.019923    9085 logs.go:123] Gathering logs for kube-proxy [0b4747b7c71b] ...
	I0805 10:45:28.019936    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b4747b7c71b"
	I0805 10:45:28.035515    9085 logs.go:123] Gathering logs for storage-provisioner [5ffcf9115c5b] ...
	I0805 10:45:28.035528    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ffcf9115c5b"
	I0805 10:45:28.047589    9085 logs.go:123] Gathering logs for kube-apiserver [a10e618b4b87] ...
	I0805 10:45:28.047600    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a10e618b4b87"
	I0805 10:45:28.062173    9085 logs.go:123] Gathering logs for coredns [d08309a6b024] ...
	I0805 10:45:28.062186    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08309a6b024"
	I0805 10:45:28.073849    9085 logs.go:123] Gathering logs for coredns [09cf1cd1eb79] ...
	I0805 10:45:28.073859    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09cf1cd1eb79"
	I0805 10:45:28.088108    9085 logs.go:123] Gathering logs for coredns [3c0b270bfc85] ...
	I0805 10:45:28.088119    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c0b270bfc85"
	I0805 10:45:28.099629    9085 logs.go:123] Gathering logs for dmesg ...
	I0805 10:45:28.099640    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:45:28.104590    9085 logs.go:123] Gathering logs for etcd [dd283aa612f7] ...
	I0805 10:45:28.104597    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd283aa612f7"
	I0805 10:45:28.118322    9085 logs.go:123] Gathering logs for kube-controller-manager [6b61a4d7e65e] ...
	I0805 10:45:28.118331    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b61a4d7e65e"
	I0805 10:45:28.136243    9085 logs.go:123] Gathering logs for Docker ...
	I0805 10:45:28.136254    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:45:30.663926    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:45:35.666309    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:45:35.666606    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:45:35.694396    9085 logs.go:276] 1 containers: [a10e618b4b87]
	I0805 10:45:35.694504    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:45:35.713715    9085 logs.go:276] 1 containers: [dd283aa612f7]
	I0805 10:45:35.713795    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:45:35.727967    9085 logs.go:276] 4 containers: [d08309a6b024 911b32609175 09cf1cd1eb79 3c0b270bfc85]
	I0805 10:45:35.728048    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:45:35.740035    9085 logs.go:276] 1 containers: [4cd2114f032c]
	I0805 10:45:35.740103    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:45:35.750447    9085 logs.go:276] 1 containers: [0b4747b7c71b]
	I0805 10:45:35.750512    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:45:35.760912    9085 logs.go:276] 1 containers: [6b61a4d7e65e]
	I0805 10:45:35.760983    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:45:35.778196    9085 logs.go:276] 0 containers: []
	W0805 10:45:35.778207    9085 logs.go:278] No container was found matching "kindnet"
	I0805 10:45:35.778264    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:45:35.789147    9085 logs.go:276] 1 containers: [5ffcf9115c5b]
	I0805 10:45:35.789163    9085 logs.go:123] Gathering logs for dmesg ...
	I0805 10:45:35.789168    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:45:35.793749    9085 logs.go:123] Gathering logs for coredns [911b32609175] ...
	I0805 10:45:35.793756    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 911b32609175"
	I0805 10:45:35.805484    9085 logs.go:123] Gathering logs for kubelet ...
	I0805 10:45:35.805497    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:45:35.842542    9085 logs.go:123] Gathering logs for kube-scheduler [4cd2114f032c] ...
	I0805 10:45:35.842554    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4cd2114f032c"
	I0805 10:45:35.857538    9085 logs.go:123] Gathering logs for kube-proxy [0b4747b7c71b] ...
	I0805 10:45:35.857551    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b4747b7c71b"
	I0805 10:45:35.869513    9085 logs.go:123] Gathering logs for kube-controller-manager [6b61a4d7e65e] ...
	I0805 10:45:35.869524    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b61a4d7e65e"
	I0805 10:45:35.886968    9085 logs.go:123] Gathering logs for storage-provisioner [5ffcf9115c5b] ...
	I0805 10:45:35.886980    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ffcf9115c5b"
	I0805 10:45:35.897999    9085 logs.go:123] Gathering logs for Docker ...
	I0805 10:45:35.898009    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:45:35.921740    9085 logs.go:123] Gathering logs for container status ...
	I0805 10:45:35.921750    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:45:35.933372    9085 logs.go:123] Gathering logs for coredns [3c0b270bfc85] ...
	I0805 10:45:35.933385    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c0b270bfc85"
	I0805 10:45:35.945640    9085 logs.go:123] Gathering logs for etcd [dd283aa612f7] ...
	I0805 10:45:35.945651    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd283aa612f7"
	I0805 10:45:35.959866    9085 logs.go:123] Gathering logs for coredns [d08309a6b024] ...
	I0805 10:45:35.959876    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08309a6b024"
	I0805 10:45:35.971631    9085 logs.go:123] Gathering logs for kube-apiserver [a10e618b4b87] ...
	I0805 10:45:35.971644    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a10e618b4b87"
	I0805 10:45:35.986990    9085 logs.go:123] Gathering logs for coredns [09cf1cd1eb79] ...
	I0805 10:45:35.987002    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09cf1cd1eb79"
	I0805 10:45:36.001673    9085 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:45:36.001687    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:45:38.543149    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:45:43.544610    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:45:43.544845    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:45:43.563274    9085 logs.go:276] 1 containers: [a10e618b4b87]
	I0805 10:45:43.563369    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:45:43.579596    9085 logs.go:276] 1 containers: [dd283aa612f7]
	I0805 10:45:43.579668    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:45:43.591039    9085 logs.go:276] 4 containers: [d08309a6b024 911b32609175 09cf1cd1eb79 3c0b270bfc85]
	I0805 10:45:43.591111    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:45:43.605907    9085 logs.go:276] 1 containers: [4cd2114f032c]
	I0805 10:45:43.605981    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:45:43.615905    9085 logs.go:276] 1 containers: [0b4747b7c71b]
	I0805 10:45:43.615972    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:45:43.628467    9085 logs.go:276] 1 containers: [6b61a4d7e65e]
	I0805 10:45:43.628538    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:45:43.638882    9085 logs.go:276] 0 containers: []
	W0805 10:45:43.638894    9085 logs.go:278] No container was found matching "kindnet"
	I0805 10:45:43.638953    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:45:43.649558    9085 logs.go:276] 1 containers: [5ffcf9115c5b]
	I0805 10:45:43.649577    9085 logs.go:123] Gathering logs for kube-apiserver [a10e618b4b87] ...
	I0805 10:45:43.649582    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a10e618b4b87"
	I0805 10:45:43.663862    9085 logs.go:123] Gathering logs for Docker ...
	I0805 10:45:43.663873    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:45:43.689671    9085 logs.go:123] Gathering logs for container status ...
	I0805 10:45:43.689678    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:45:43.701363    9085 logs.go:123] Gathering logs for kubelet ...
	I0805 10:45:43.701379    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:45:43.739711    9085 logs.go:123] Gathering logs for coredns [d08309a6b024] ...
	I0805 10:45:43.739720    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08309a6b024"
	I0805 10:45:43.751605    9085 logs.go:123] Gathering logs for etcd [dd283aa612f7] ...
	I0805 10:45:43.751617    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd283aa612f7"
	I0805 10:45:43.765806    9085 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:45:43.765816    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:45:43.804361    9085 logs.go:123] Gathering logs for coredns [911b32609175] ...
	I0805 10:45:43.804372    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 911b32609175"
	I0805 10:45:43.816631    9085 logs.go:123] Gathering logs for coredns [09cf1cd1eb79] ...
	I0805 10:45:43.816641    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09cf1cd1eb79"
	I0805 10:45:43.828682    9085 logs.go:123] Gathering logs for coredns [3c0b270bfc85] ...
	I0805 10:45:43.828692    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c0b270bfc85"
	I0805 10:45:43.843629    9085 logs.go:123] Gathering logs for kube-proxy [0b4747b7c71b] ...
	I0805 10:45:43.843639    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b4747b7c71b"
	I0805 10:45:43.855262    9085 logs.go:123] Gathering logs for storage-provisioner [5ffcf9115c5b] ...
	I0805 10:45:43.855272    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ffcf9115c5b"
	I0805 10:45:43.868279    9085 logs.go:123] Gathering logs for dmesg ...
	I0805 10:45:43.868290    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:45:43.872753    9085 logs.go:123] Gathering logs for kube-controller-manager [6b61a4d7e65e] ...
	I0805 10:45:43.872762    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b61a4d7e65e"
	I0805 10:45:43.891371    9085 logs.go:123] Gathering logs for kube-scheduler [4cd2114f032c] ...
	I0805 10:45:43.891385    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4cd2114f032c"
	I0805 10:45:46.408308    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:45:51.410718    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:45:51.410884    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:45:51.426266    9085 logs.go:276] 1 containers: [a10e618b4b87]
	I0805 10:45:51.426340    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:45:51.441928    9085 logs.go:276] 1 containers: [dd283aa612f7]
	I0805 10:45:51.441996    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:45:51.456798    9085 logs.go:276] 4 containers: [d08309a6b024 911b32609175 09cf1cd1eb79 3c0b270bfc85]
	I0805 10:45:51.456877    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:45:51.469188    9085 logs.go:276] 1 containers: [4cd2114f032c]
	I0805 10:45:51.469250    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:45:51.486189    9085 logs.go:276] 1 containers: [0b4747b7c71b]
	I0805 10:45:51.486247    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:45:51.496928    9085 logs.go:276] 1 containers: [6b61a4d7e65e]
	I0805 10:45:51.496988    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:45:51.507627    9085 logs.go:276] 0 containers: []
	W0805 10:45:51.507639    9085 logs.go:278] No container was found matching "kindnet"
	I0805 10:45:51.507699    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:45:51.518465    9085 logs.go:276] 1 containers: [5ffcf9115c5b]
	I0805 10:45:51.518483    9085 logs.go:123] Gathering logs for dmesg ...
	I0805 10:45:51.518488    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:45:51.523250    9085 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:45:51.523257    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:45:51.559024    9085 logs.go:123] Gathering logs for kube-apiserver [a10e618b4b87] ...
	I0805 10:45:51.559038    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a10e618b4b87"
	I0805 10:45:51.573582    9085 logs.go:123] Gathering logs for etcd [dd283aa612f7] ...
	I0805 10:45:51.573594    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd283aa612f7"
	I0805 10:45:51.587943    9085 logs.go:123] Gathering logs for coredns [d08309a6b024] ...
	I0805 10:45:51.587956    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08309a6b024"
	I0805 10:45:51.599641    9085 logs.go:123] Gathering logs for coredns [911b32609175] ...
	I0805 10:45:51.599655    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 911b32609175"
	I0805 10:45:51.611262    9085 logs.go:123] Gathering logs for coredns [3c0b270bfc85] ...
	I0805 10:45:51.611274    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c0b270bfc85"
	I0805 10:45:51.623476    9085 logs.go:123] Gathering logs for kube-scheduler [4cd2114f032c] ...
	I0805 10:45:51.623488    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4cd2114f032c"
	I0805 10:45:51.638141    9085 logs.go:123] Gathering logs for Docker ...
	I0805 10:45:51.638150    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:45:51.661800    9085 logs.go:123] Gathering logs for coredns [09cf1cd1eb79] ...
	I0805 10:45:51.661808    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09cf1cd1eb79"
	I0805 10:45:51.672942    9085 logs.go:123] Gathering logs for kube-proxy [0b4747b7c71b] ...
	I0805 10:45:51.672953    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b4747b7c71b"
	I0805 10:45:51.684463    9085 logs.go:123] Gathering logs for kubelet ...
	I0805 10:45:51.684476    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:45:51.721572    9085 logs.go:123] Gathering logs for kube-controller-manager [6b61a4d7e65e] ...
	I0805 10:45:51.721581    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b61a4d7e65e"
	I0805 10:45:51.743758    9085 logs.go:123] Gathering logs for storage-provisioner [5ffcf9115c5b] ...
	I0805 10:45:51.743769    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ffcf9115c5b"
	I0805 10:45:51.755850    9085 logs.go:123] Gathering logs for container status ...
	I0805 10:45:51.755865    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:45:54.270961    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:45:59.272678    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:45:59.272886    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:45:59.295112    9085 logs.go:276] 1 containers: [a10e618b4b87]
	I0805 10:45:59.295211    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:45:59.312706    9085 logs.go:276] 1 containers: [dd283aa612f7]
	I0805 10:45:59.312788    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:45:59.325739    9085 logs.go:276] 4 containers: [d08309a6b024 911b32609175 09cf1cd1eb79 3c0b270bfc85]
	I0805 10:45:59.325812    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:45:59.336450    9085 logs.go:276] 1 containers: [4cd2114f032c]
	I0805 10:45:59.336519    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:45:59.346765    9085 logs.go:276] 1 containers: [0b4747b7c71b]
	I0805 10:45:59.346832    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:45:59.356871    9085 logs.go:276] 1 containers: [6b61a4d7e65e]
	I0805 10:45:59.356931    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:45:59.366946    9085 logs.go:276] 0 containers: []
	W0805 10:45:59.366963    9085 logs.go:278] No container was found matching "kindnet"
	I0805 10:45:59.367022    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:45:59.381345    9085 logs.go:276] 1 containers: [5ffcf9115c5b]
	I0805 10:45:59.381362    9085 logs.go:123] Gathering logs for kube-apiserver [a10e618b4b87] ...
	I0805 10:45:59.381367    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a10e618b4b87"
	I0805 10:45:59.395928    9085 logs.go:123] Gathering logs for coredns [09cf1cd1eb79] ...
	I0805 10:45:59.395937    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09cf1cd1eb79"
	I0805 10:45:59.407897    9085 logs.go:123] Gathering logs for kube-proxy [0b4747b7c71b] ...
	I0805 10:45:59.407911    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b4747b7c71b"
	I0805 10:45:59.419821    9085 logs.go:123] Gathering logs for storage-provisioner [5ffcf9115c5b] ...
	I0805 10:45:59.419834    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ffcf9115c5b"
	I0805 10:45:59.431378    9085 logs.go:123] Gathering logs for Docker ...
	I0805 10:45:59.431389    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:45:59.455476    9085 logs.go:123] Gathering logs for dmesg ...
	I0805 10:45:59.455484    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:45:59.460047    9085 logs.go:123] Gathering logs for container status ...
	I0805 10:45:59.460054    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:45:59.471563    9085 logs.go:123] Gathering logs for kube-scheduler [4cd2114f032c] ...
	I0805 10:45:59.471575    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4cd2114f032c"
	I0805 10:45:59.486773    9085 logs.go:123] Gathering logs for coredns [3c0b270bfc85] ...
	I0805 10:45:59.486783    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c0b270bfc85"
	I0805 10:45:59.507580    9085 logs.go:123] Gathering logs for kubelet ...
	I0805 10:45:59.507592    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:45:59.546744    9085 logs.go:123] Gathering logs for etcd [dd283aa612f7] ...
	I0805 10:45:59.546756    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd283aa612f7"
	I0805 10:45:59.560840    9085 logs.go:123] Gathering logs for coredns [d08309a6b024] ...
	I0805 10:45:59.560853    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08309a6b024"
	I0805 10:45:59.572497    9085 logs.go:123] Gathering logs for coredns [911b32609175] ...
	I0805 10:45:59.572508    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 911b32609175"
	I0805 10:45:59.584553    9085 logs.go:123] Gathering logs for kube-controller-manager [6b61a4d7e65e] ...
	I0805 10:45:59.584565    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b61a4d7e65e"
	I0805 10:45:59.602209    9085 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:45:59.602218    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:46:02.138536    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:46:07.139658    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:46:07.139846    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:46:07.164648    9085 logs.go:276] 1 containers: [a10e618b4b87]
	I0805 10:46:07.164767    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:46:07.183462    9085 logs.go:276] 1 containers: [dd283aa612f7]
	I0805 10:46:07.183532    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:46:07.196416    9085 logs.go:276] 4 containers: [d08309a6b024 911b32609175 09cf1cd1eb79 3c0b270bfc85]
	I0805 10:46:07.196494    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:46:07.207882    9085 logs.go:276] 1 containers: [4cd2114f032c]
	I0805 10:46:07.207951    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:46:07.218361    9085 logs.go:276] 1 containers: [0b4747b7c71b]
	I0805 10:46:07.218432    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:46:07.229011    9085 logs.go:276] 1 containers: [6b61a4d7e65e]
	I0805 10:46:07.229083    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:46:07.240426    9085 logs.go:276] 0 containers: []
	W0805 10:46:07.240437    9085 logs.go:278] No container was found matching "kindnet"
	I0805 10:46:07.240496    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:46:07.252123    9085 logs.go:276] 1 containers: [5ffcf9115c5b]
	I0805 10:46:07.252144    9085 logs.go:123] Gathering logs for kubelet ...
	I0805 10:46:07.252149    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:46:07.290424    9085 logs.go:123] Gathering logs for coredns [d08309a6b024] ...
	I0805 10:46:07.290452    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08309a6b024"
	I0805 10:46:07.303251    9085 logs.go:123] Gathering logs for coredns [911b32609175] ...
	I0805 10:46:07.303265    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 911b32609175"
	I0805 10:46:07.315835    9085 logs.go:123] Gathering logs for coredns [3c0b270bfc85] ...
	I0805 10:46:07.315845    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c0b270bfc85"
	I0805 10:46:07.327342    9085 logs.go:123] Gathering logs for dmesg ...
	I0805 10:46:07.327353    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:46:07.332270    9085 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:46:07.332279    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:46:07.369305    9085 logs.go:123] Gathering logs for kube-proxy [0b4747b7c71b] ...
	I0805 10:46:07.369321    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b4747b7c71b"
	I0805 10:46:07.381600    9085 logs.go:123] Gathering logs for coredns [09cf1cd1eb79] ...
	I0805 10:46:07.381612    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09cf1cd1eb79"
	I0805 10:46:07.399631    9085 logs.go:123] Gathering logs for kube-scheduler [4cd2114f032c] ...
	I0805 10:46:07.399642    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4cd2114f032c"
	I0805 10:46:07.414055    9085 logs.go:123] Gathering logs for storage-provisioner [5ffcf9115c5b] ...
	I0805 10:46:07.414068    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ffcf9115c5b"
	I0805 10:46:07.425964    9085 logs.go:123] Gathering logs for kube-apiserver [a10e618b4b87] ...
	I0805 10:46:07.425975    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a10e618b4b87"
	I0805 10:46:07.439764    9085 logs.go:123] Gathering logs for etcd [dd283aa612f7] ...
	I0805 10:46:07.439778    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd283aa612f7"
	I0805 10:46:07.453249    9085 logs.go:123] Gathering logs for kube-controller-manager [6b61a4d7e65e] ...
	I0805 10:46:07.453259    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b61a4d7e65e"
	I0805 10:46:07.475227    9085 logs.go:123] Gathering logs for Docker ...
	I0805 10:46:07.475237    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:46:07.501237    9085 logs.go:123] Gathering logs for container status ...
	I0805 10:46:07.501245    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:46:10.014273    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:46:15.016725    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:46:15.017107    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:46:15.057180    9085 logs.go:276] 1 containers: [a10e618b4b87]
	I0805 10:46:15.057325    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:46:15.080079    9085 logs.go:276] 1 containers: [dd283aa612f7]
	I0805 10:46:15.080174    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:46:15.095583    9085 logs.go:276] 4 containers: [d08309a6b024 911b32609175 09cf1cd1eb79 3c0b270bfc85]
	I0805 10:46:15.095665    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:46:15.107646    9085 logs.go:276] 1 containers: [4cd2114f032c]
	I0805 10:46:15.107715    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:46:15.118282    9085 logs.go:276] 1 containers: [0b4747b7c71b]
	I0805 10:46:15.118353    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:46:15.129396    9085 logs.go:276] 1 containers: [6b61a4d7e65e]
	I0805 10:46:15.129464    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:46:15.139399    9085 logs.go:276] 0 containers: []
	W0805 10:46:15.139411    9085 logs.go:278] No container was found matching "kindnet"
	I0805 10:46:15.139469    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:46:15.150290    9085 logs.go:276] 1 containers: [5ffcf9115c5b]
	I0805 10:46:15.150307    9085 logs.go:123] Gathering logs for kube-apiserver [a10e618b4b87] ...
	I0805 10:46:15.150312    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a10e618b4b87"
	I0805 10:46:15.164666    9085 logs.go:123] Gathering logs for coredns [09cf1cd1eb79] ...
	I0805 10:46:15.164676    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09cf1cd1eb79"
	I0805 10:46:15.176679    9085 logs.go:123] Gathering logs for kube-scheduler [4cd2114f032c] ...
	I0805 10:46:15.176693    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4cd2114f032c"
	I0805 10:46:15.191826    9085 logs.go:123] Gathering logs for kube-proxy [0b4747b7c71b] ...
	I0805 10:46:15.191836    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b4747b7c71b"
	I0805 10:46:15.203540    9085 logs.go:123] Gathering logs for container status ...
	I0805 10:46:15.203553    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:46:15.215499    9085 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:46:15.215512    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:46:15.251038    9085 logs.go:123] Gathering logs for etcd [dd283aa612f7] ...
	I0805 10:46:15.251052    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd283aa612f7"
	I0805 10:46:15.265366    9085 logs.go:123] Gathering logs for kube-controller-manager [6b61a4d7e65e] ...
	I0805 10:46:15.265380    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b61a4d7e65e"
	I0805 10:46:15.283331    9085 logs.go:123] Gathering logs for kubelet ...
	I0805 10:46:15.283346    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:46:15.322184    9085 logs.go:123] Gathering logs for storage-provisioner [5ffcf9115c5b] ...
	I0805 10:46:15.322200    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ffcf9115c5b"
	I0805 10:46:15.333695    9085 logs.go:123] Gathering logs for dmesg ...
	I0805 10:46:15.333709    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:46:15.337977    9085 logs.go:123] Gathering logs for coredns [d08309a6b024] ...
	I0805 10:46:15.337984    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08309a6b024"
	I0805 10:46:15.349633    9085 logs.go:123] Gathering logs for coredns [911b32609175] ...
	I0805 10:46:15.349645    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 911b32609175"
	I0805 10:46:15.361358    9085 logs.go:123] Gathering logs for coredns [3c0b270bfc85] ...
	I0805 10:46:15.361373    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c0b270bfc85"
	I0805 10:46:15.376749    9085 logs.go:123] Gathering logs for Docker ...
	I0805 10:46:15.376763    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:46:17.903857    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:46:22.906158    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:46:22.906380    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:46:22.934421    9085 logs.go:276] 1 containers: [a10e618b4b87]
	I0805 10:46:22.934552    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:46:22.951602    9085 logs.go:276] 1 containers: [dd283aa612f7]
	I0805 10:46:22.951684    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:46:22.965160    9085 logs.go:276] 4 containers: [d08309a6b024 911b32609175 09cf1cd1eb79 3c0b270bfc85]
	I0805 10:46:22.965229    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:46:22.976773    9085 logs.go:276] 1 containers: [4cd2114f032c]
	I0805 10:46:22.976845    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:46:22.987456    9085 logs.go:276] 1 containers: [0b4747b7c71b]
	I0805 10:46:22.987551    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:46:22.998451    9085 logs.go:276] 1 containers: [6b61a4d7e65e]
	I0805 10:46:22.998513    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:46:23.008266    9085 logs.go:276] 0 containers: []
	W0805 10:46:23.008279    9085 logs.go:278] No container was found matching "kindnet"
	I0805 10:46:23.008336    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:46:23.019390    9085 logs.go:276] 1 containers: [5ffcf9115c5b]
	I0805 10:46:23.019407    9085 logs.go:123] Gathering logs for kube-controller-manager [6b61a4d7e65e] ...
	I0805 10:46:23.019412    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b61a4d7e65e"
	I0805 10:46:23.037137    9085 logs.go:123] Gathering logs for Docker ...
	I0805 10:46:23.037150    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:46:23.061970    9085 logs.go:123] Gathering logs for kube-apiserver [a10e618b4b87] ...
	I0805 10:46:23.061978    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a10e618b4b87"
	I0805 10:46:23.075793    9085 logs.go:123] Gathering logs for coredns [09cf1cd1eb79] ...
	I0805 10:46:23.075807    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09cf1cd1eb79"
	I0805 10:46:23.088571    9085 logs.go:123] Gathering logs for kube-proxy [0b4747b7c71b] ...
	I0805 10:46:23.088583    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b4747b7c71b"
	I0805 10:46:23.102323    9085 logs.go:123] Gathering logs for storage-provisioner [5ffcf9115c5b] ...
	I0805 10:46:23.102334    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ffcf9115c5b"
	I0805 10:46:23.113889    9085 logs.go:123] Gathering logs for coredns [d08309a6b024] ...
	I0805 10:46:23.113902    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08309a6b024"
	I0805 10:46:23.126118    9085 logs.go:123] Gathering logs for coredns [3c0b270bfc85] ...
	I0805 10:46:23.126130    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c0b270bfc85"
	I0805 10:46:23.137691    9085 logs.go:123] Gathering logs for kubelet ...
	I0805 10:46:23.137701    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:46:23.175507    9085 logs.go:123] Gathering logs for container status ...
	I0805 10:46:23.175515    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:46:23.187327    9085 logs.go:123] Gathering logs for etcd [dd283aa612f7] ...
	I0805 10:46:23.187339    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd283aa612f7"
	I0805 10:46:23.201883    9085 logs.go:123] Gathering logs for coredns [911b32609175] ...
	I0805 10:46:23.201898    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 911b32609175"
	I0805 10:46:23.213279    9085 logs.go:123] Gathering logs for kube-scheduler [4cd2114f032c] ...
	I0805 10:46:23.213293    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4cd2114f032c"
	I0805 10:46:23.232084    9085 logs.go:123] Gathering logs for dmesg ...
	I0805 10:46:23.232098    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:46:23.236871    9085 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:46:23.236879    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:46:25.772373    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:46:30.774656    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:46:30.774919    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:46:30.809021    9085 logs.go:276] 1 containers: [a10e618b4b87]
	I0805 10:46:30.809138    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:46:30.826087    9085 logs.go:276] 1 containers: [dd283aa612f7]
	I0805 10:46:30.826172    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:46:30.839190    9085 logs.go:276] 4 containers: [d08309a6b024 911b32609175 09cf1cd1eb79 3c0b270bfc85]
	I0805 10:46:30.839267    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:46:30.850703    9085 logs.go:276] 1 containers: [4cd2114f032c]
	I0805 10:46:30.850771    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:46:30.860648    9085 logs.go:276] 1 containers: [0b4747b7c71b]
	I0805 10:46:30.860719    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:46:30.871328    9085 logs.go:276] 1 containers: [6b61a4d7e65e]
	I0805 10:46:30.871392    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:46:30.881479    9085 logs.go:276] 0 containers: []
	W0805 10:46:30.881492    9085 logs.go:278] No container was found matching "kindnet"
	I0805 10:46:30.881549    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:46:30.892045    9085 logs.go:276] 1 containers: [5ffcf9115c5b]
	I0805 10:46:30.892063    9085 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:46:30.892069    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:46:30.927362    9085 logs.go:123] Gathering logs for coredns [911b32609175] ...
	I0805 10:46:30.927372    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 911b32609175"
	I0805 10:46:30.939125    9085 logs.go:123] Gathering logs for kube-controller-manager [6b61a4d7e65e] ...
	I0805 10:46:30.939136    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b61a4d7e65e"
	I0805 10:46:30.956220    9085 logs.go:123] Gathering logs for coredns [d08309a6b024] ...
	I0805 10:46:30.956244    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08309a6b024"
	I0805 10:46:30.969082    9085 logs.go:123] Gathering logs for kube-scheduler [4cd2114f032c] ...
	I0805 10:46:30.969094    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4cd2114f032c"
	I0805 10:46:30.983832    9085 logs.go:123] Gathering logs for kube-proxy [0b4747b7c71b] ...
	I0805 10:46:30.983841    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b4747b7c71b"
	I0805 10:46:30.995548    9085 logs.go:123] Gathering logs for Docker ...
	I0805 10:46:30.995558    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:46:31.021131    9085 logs.go:123] Gathering logs for kubelet ...
	I0805 10:46:31.021140    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:46:31.058687    9085 logs.go:123] Gathering logs for dmesg ...
	I0805 10:46:31.058694    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:46:31.062910    9085 logs.go:123] Gathering logs for storage-provisioner [5ffcf9115c5b] ...
	I0805 10:46:31.062919    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ffcf9115c5b"
	I0805 10:46:31.074955    9085 logs.go:123] Gathering logs for container status ...
	I0805 10:46:31.074966    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:46:31.088063    9085 logs.go:123] Gathering logs for kube-apiserver [a10e618b4b87] ...
	I0805 10:46:31.088075    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a10e618b4b87"
	I0805 10:46:31.102730    9085 logs.go:123] Gathering logs for etcd [dd283aa612f7] ...
	I0805 10:46:31.102741    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd283aa612f7"
	I0805 10:46:31.116213    9085 logs.go:123] Gathering logs for coredns [09cf1cd1eb79] ...
	I0805 10:46:31.116225    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09cf1cd1eb79"
	I0805 10:46:31.128763    9085 logs.go:123] Gathering logs for coredns [3c0b270bfc85] ...
	I0805 10:46:31.128774    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c0b270bfc85"
	I0805 10:46:33.642330    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:46:38.644580    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:46:38.644713    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:46:38.656175    9085 logs.go:276] 1 containers: [a10e618b4b87]
	I0805 10:46:38.656248    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:46:38.666778    9085 logs.go:276] 1 containers: [dd283aa612f7]
	I0805 10:46:38.666848    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:46:38.678383    9085 logs.go:276] 4 containers: [d08309a6b024 911b32609175 09cf1cd1eb79 3c0b270bfc85]
	I0805 10:46:38.678455    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:46:38.689609    9085 logs.go:276] 1 containers: [4cd2114f032c]
	I0805 10:46:38.689675    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:46:38.699947    9085 logs.go:276] 1 containers: [0b4747b7c71b]
	I0805 10:46:38.700016    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:46:38.710832    9085 logs.go:276] 1 containers: [6b61a4d7e65e]
	I0805 10:46:38.710904    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:46:38.720964    9085 logs.go:276] 0 containers: []
	W0805 10:46:38.720977    9085 logs.go:278] No container was found matching "kindnet"
	I0805 10:46:38.721032    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:46:38.730946    9085 logs.go:276] 1 containers: [5ffcf9115c5b]
	I0805 10:46:38.730966    9085 logs.go:123] Gathering logs for kube-apiserver [a10e618b4b87] ...
	I0805 10:46:38.730971    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a10e618b4b87"
	I0805 10:46:38.750580    9085 logs.go:123] Gathering logs for coredns [3c0b270bfc85] ...
	I0805 10:46:38.750592    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c0b270bfc85"
	I0805 10:46:38.762216    9085 logs.go:123] Gathering logs for kube-proxy [0b4747b7c71b] ...
	I0805 10:46:38.762228    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b4747b7c71b"
	I0805 10:46:38.773742    9085 logs.go:123] Gathering logs for dmesg ...
	I0805 10:46:38.773754    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:46:38.778365    9085 logs.go:123] Gathering logs for coredns [d08309a6b024] ...
	I0805 10:46:38.778376    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08309a6b024"
	I0805 10:46:38.791169    9085 logs.go:123] Gathering logs for coredns [911b32609175] ...
	I0805 10:46:38.791179    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 911b32609175"
	I0805 10:46:38.802859    9085 logs.go:123] Gathering logs for storage-provisioner [5ffcf9115c5b] ...
	I0805 10:46:38.802870    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ffcf9115c5b"
	I0805 10:46:38.815996    9085 logs.go:123] Gathering logs for kubelet ...
	I0805 10:46:38.816007    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:46:38.855249    9085 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:46:38.855259    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:46:38.891701    9085 logs.go:123] Gathering logs for coredns [09cf1cd1eb79] ...
	I0805 10:46:38.891712    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09cf1cd1eb79"
	I0805 10:46:38.903498    9085 logs.go:123] Gathering logs for container status ...
	I0805 10:46:38.903509    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:46:38.915429    9085 logs.go:123] Gathering logs for etcd [dd283aa612f7] ...
	I0805 10:46:38.915440    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd283aa612f7"
	I0805 10:46:38.929958    9085 logs.go:123] Gathering logs for kube-scheduler [4cd2114f032c] ...
	I0805 10:46:38.929974    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4cd2114f032c"
	I0805 10:46:38.944123    9085 logs.go:123] Gathering logs for kube-controller-manager [6b61a4d7e65e] ...
	I0805 10:46:38.944134    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b61a4d7e65e"
	I0805 10:46:38.962065    9085 logs.go:123] Gathering logs for Docker ...
	I0805 10:46:38.962078    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:46:41.489303    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:46:46.491947    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:46:46.492062    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:46:46.504497    9085 logs.go:276] 1 containers: [a10e618b4b87]
	I0805 10:46:46.504570    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:46:46.515235    9085 logs.go:276] 1 containers: [dd283aa612f7]
	I0805 10:46:46.515314    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:46:46.526413    9085 logs.go:276] 4 containers: [ac67f2851614 d08309a6b024 911b32609175 09cf1cd1eb79]
	I0805 10:46:46.526483    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:46:46.536988    9085 logs.go:276] 1 containers: [4cd2114f032c]
	I0805 10:46:46.537049    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:46:46.550011    9085 logs.go:276] 1 containers: [0b4747b7c71b]
	I0805 10:46:46.550071    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:46:46.565711    9085 logs.go:276] 1 containers: [6b61a4d7e65e]
	I0805 10:46:46.565779    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:46:46.576383    9085 logs.go:276] 0 containers: []
	W0805 10:46:46.576394    9085 logs.go:278] No container was found matching "kindnet"
	I0805 10:46:46.576446    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:46:46.587936    9085 logs.go:276] 1 containers: [5ffcf9115c5b]
	I0805 10:46:46.587954    9085 logs.go:123] Gathering logs for kube-apiserver [a10e618b4b87] ...
	I0805 10:46:46.587959    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a10e618b4b87"
	I0805 10:46:46.602178    9085 logs.go:123] Gathering logs for kube-proxy [0b4747b7c71b] ...
	I0805 10:46:46.602192    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b4747b7c71b"
	I0805 10:46:46.614624    9085 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:46:46.614637    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:46:46.649493    9085 logs.go:123] Gathering logs for coredns [d08309a6b024] ...
	I0805 10:46:46.649507    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08309a6b024"
	I0805 10:46:46.661745    9085 logs.go:123] Gathering logs for kubelet ...
	I0805 10:46:46.661759    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:46:46.698054    9085 logs.go:123] Gathering logs for coredns [09cf1cd1eb79] ...
	I0805 10:46:46.698063    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09cf1cd1eb79"
	I0805 10:46:46.709603    9085 logs.go:123] Gathering logs for kube-scheduler [4cd2114f032c] ...
	I0805 10:46:46.709618    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4cd2114f032c"
	I0805 10:46:46.723866    9085 logs.go:123] Gathering logs for kube-controller-manager [6b61a4d7e65e] ...
	I0805 10:46:46.723881    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b61a4d7e65e"
	I0805 10:46:46.743287    9085 logs.go:123] Gathering logs for Docker ...
	I0805 10:46:46.743297    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:46:46.767654    9085 logs.go:123] Gathering logs for dmesg ...
	I0805 10:46:46.767669    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:46:46.772579    9085 logs.go:123] Gathering logs for etcd [dd283aa612f7] ...
	I0805 10:46:46.772586    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd283aa612f7"
	I0805 10:46:46.789351    9085 logs.go:123] Gathering logs for coredns [ac67f2851614] ...
	I0805 10:46:46.789364    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac67f2851614"
	I0805 10:46:46.800649    9085 logs.go:123] Gathering logs for coredns [911b32609175] ...
	I0805 10:46:46.800663    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 911b32609175"
	I0805 10:46:46.812179    9085 logs.go:123] Gathering logs for storage-provisioner [5ffcf9115c5b] ...
	I0805 10:46:46.812195    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ffcf9115c5b"
	I0805 10:46:46.827013    9085 logs.go:123] Gathering logs for container status ...
	I0805 10:46:46.827027    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:46:49.341110    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:46:54.343388    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:46:54.346948    9085 out.go:177] 
	W0805 10:46:54.350955    9085 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0805 10:46:54.350970    9085 out.go:239] * 
	W0805 10:46:54.351931    9085 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 10:46:54.363002    9085 out.go:177] 

** /stderr **
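The stderr above has a clear shape: roughly every eight seconds the runner probes https://10.0.2.15:8443/healthz, each GET dies after about 5s with a client timeout, and the surrounding 6m0s node wait eventually expires. The following is a minimal Go sketch of that probe-until-deadline pattern, for illustration only; the names, timeouts, and TLS handling are assumptions, not minikube's actual api_server.go implementation.

// Sketch of the healthz polling loop visible in the stderr above: each probe
// GETs /healthz with a short per-request timeout, and the caller retries
// until an overall deadline expires. Illustrative, not minikube's real code.
package main

import (
	"context"
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, overall, perProbe time.Duration) error {
	// The apiserver uses a self-signed cert, so a probe like this would
	// typically skip verification (assumption).
	client := &http.Client{
		Timeout:   perProbe,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	ctx, cancel := context.WithTimeout(context.Background(), overall)
	defer cancel()
	for {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz reported healthy
			}
		}
		select {
		case <-ctx.Done():
			// Matches the outcome at the end of the log above.
			return fmt.Errorf("apiserver healthz never reported healthy: %w", ctx.Err())
		case <-time.After(2 * time.Second): // back off between probes
		}
	}
}

func main() {
	err := waitForHealthz("https://10.0.2.15:8443/healthz", 6*time.Minute, 5*time.Second)
	fmt.Println(err)
}

Against a guest whose apiserver never answers, this returns the same "apiserver healthz never reported healthy: context deadline exceeded" error that terminates the run above.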
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-952000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:626: *** TestRunningBinaryUpgrade FAILED at 2024-08-05 10:46:54.454543 -0700 PDT m=+1285.171524251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-952000 -n running-upgrade-952000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-952000 -n running-upgrade-952000: exit status 2 (15.718322792s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
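Between healthz probes, the runner enumerates containers per control-plane component and tails their logs, and the post-mortem `minikube logs -n 25` below repeats the same discovery/tail pattern. Here is a rough local sketch of that loop; it is an assumption-laden illustration (minikube actually runs these commands inside the guest via ssh_runner.go, not through local exec as shown):

// Sketch of the per-component log gathering repeated in the log above:
// list container IDs whose names match k8s_<component>, then tail each
// container's logs, mirroring `docker ps -a --filter=name=k8s_... --format={{.ID}}`
// and `docker logs --tail 400 <id>`.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists IDs of all containers (running or exited) whose names
// match the k8s_<component> prefix.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
	for _, c := range components {
		ids, err := containerIDs(c)
		if err != nil || len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", c) // cf. the "kindnet" warning above
			continue
		}
		for _, id := range ids {
			// Tail the last 400 lines of each matching container.
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("==> %s [%s] <==\n%s", c, id, logs)
		}
	}
}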
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-952000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-810000 sudo                                | cilium-810000             | jenkins | v1.33.1 | 05 Aug 24 10:36 PDT |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-810000 sudo                                | cilium-810000             | jenkins | v1.33.1 | 05 Aug 24 10:36 PDT |                     |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-810000 sudo cat                            | cilium-810000             | jenkins | v1.33.1 | 05 Aug 24 10:36 PDT |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p cilium-810000 sudo cat                            | cilium-810000             | jenkins | v1.33.1 | 05 Aug 24 10:36 PDT |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p cilium-810000 sudo                                | cilium-810000             | jenkins | v1.33.1 | 05 Aug 24 10:36 PDT |                     |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p cilium-810000 sudo                                | cilium-810000             | jenkins | v1.33.1 | 05 Aug 24 10:36 PDT |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-810000 sudo                                | cilium-810000             | jenkins | v1.33.1 | 05 Aug 24 10:36 PDT |                     |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-810000 sudo cat                            | cilium-810000             | jenkins | v1.33.1 | 05 Aug 24 10:36 PDT |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p cilium-810000 sudo cat                            | cilium-810000             | jenkins | v1.33.1 | 05 Aug 24 10:36 PDT |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-810000 sudo                                | cilium-810000             | jenkins | v1.33.1 | 05 Aug 24 10:36 PDT |                     |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p cilium-810000 sudo                                | cilium-810000             | jenkins | v1.33.1 | 05 Aug 24 10:36 PDT |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-810000 sudo                                | cilium-810000             | jenkins | v1.33.1 | 05 Aug 24 10:36 PDT |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p cilium-810000 sudo find                           | cilium-810000             | jenkins | v1.33.1 | 05 Aug 24 10:36 PDT |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-810000 sudo crio                           | cilium-810000             | jenkins | v1.33.1 | 05 Aug 24 10:36 PDT |                     |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p cilium-810000                                     | cilium-810000             | jenkins | v1.33.1 | 05 Aug 24 10:36 PDT | 05 Aug 24 10:36 PDT |
	| start   | -p kubernetes-upgrade-234000                         | kubernetes-upgrade-234000 | jenkins | v1.33.1 | 05 Aug 24 10:36 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| delete  | -p offline-docker-828000                             | offline-docker-828000     | jenkins | v1.33.1 | 05 Aug 24 10:36 PDT | 05 Aug 24 10:36 PDT |
	| stop    | -p kubernetes-upgrade-234000                         | kubernetes-upgrade-234000 | jenkins | v1.33.1 | 05 Aug 24 10:36 PDT | 05 Aug 24 10:36 PDT |
	| start   | -p stopped-upgrade-363000                            | minikube                  | jenkins | v1.26.0 | 05 Aug 24 10:36 PDT | 05 Aug 24 10:37 PDT |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                                    |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-234000                         | kubernetes-upgrade-234000 | jenkins | v1.33.1 | 05 Aug 24 10:36 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                    |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-234000                         | kubernetes-upgrade-234000 | jenkins | v1.33.1 | 05 Aug 24 10:37 PDT | 05 Aug 24 10:37 PDT |
	| start   | -p running-upgrade-952000                            | minikube                  | jenkins | v1.26.0 | 05 Aug 24 10:37 PDT | 05 Aug 24 10:38 PDT |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                                    |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-363000 stop                          | minikube                  | jenkins | v1.26.0 | 05 Aug 24 10:37 PDT | 05 Aug 24 10:38 PDT |
	| start   | -p stopped-upgrade-363000                            | stopped-upgrade-363000    | jenkins | v1.33.1 | 05 Aug 24 10:38 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| start   | -p running-upgrade-952000                            | running-upgrade-952000    | jenkins | v1.33.1 | 05 Aug 24 10:38 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/05 10:38:12
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0805 10:38:12.339558    9085 out.go:291] Setting OutFile to fd 1 ...
	I0805 10:38:12.339720    9085 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:38:12.339723    9085 out.go:304] Setting ErrFile to fd 2...
	I0805 10:38:12.339726    9085 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:38:12.339879    9085 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
	I0805 10:38:12.341444    9085 out.go:298] Setting JSON to false
	I0805 10:38:12.361627    9085 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5862,"bootTime":1722873630,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 10:38:12.361731    9085 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 10:38:12.366467    9085 out.go:177] * [running-upgrade-952000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 10:38:12.373543    9085 out.go:177]   - MINIKUBE_LOCATION=19374
	I0805 10:38:12.373651    9085 notify.go:220] Checking for updates...
	I0805 10:38:12.379444    9085 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	I0805 10:38:12.382436    9085 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 10:38:12.383768    9085 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 10:38:12.386401    9085 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	I0805 10:38:12.389485    9085 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 10:38:12.392760    9085 config.go:182] Loaded profile config "running-upgrade-952000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0805 10:38:12.395413    9085 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0805 10:38:12.398451    9085 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 10:38:12.402461    9085 out.go:177] * Using the qemu2 driver based on existing profile
	I0805 10:38:12.409452    9085 start.go:297] selected driver: qemu2
	I0805 10:38:12.409462    9085 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-952000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51256 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-952000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0805 10:38:12.409513    9085 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 10:38:12.412027    9085 cni.go:84] Creating CNI manager for ""
	I0805 10:38:12.412044    9085 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 10:38:12.412076    9085 start.go:340] cluster config:
	{Name:running-upgrade-952000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51256 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-952000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0805 10:38:12.412136    9085 iso.go:125] acquiring lock: {Name:mk3c732f608e41abb95f9d7b90e7a96dff21b06c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 10:38:12.419381    9085 out.go:177] * Starting "running-upgrade-952000" primary control-plane node in "running-upgrade-952000" cluster
	I0805 10:38:12.423417    9085 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0805 10:38:12.423432    9085 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0805 10:38:12.423443    9085 cache.go:56] Caching tarball of preloaded images
	I0805 10:38:12.423499    9085 preload.go:172] Found /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 10:38:12.423505    9085 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0805 10:38:12.423550    9085 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/running-upgrade-952000/config.json ...
	I0805 10:38:12.423910    9085 start.go:360] acquireMachinesLock for running-upgrade-952000: {Name:mkb00f076c5be0afa0d8c4c2e732c0f60f89f86b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 10:38:20.311101    9068 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/stopped-upgrade-363000/config.json ...
	I0805 10:38:20.311448    9068 machine.go:94] provisionDockerMachine start ...
	I0805 10:38:20.311539    9068 main.go:141] libmachine: Using SSH client type: native
	I0805 10:38:20.311763    9068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10063aa10] 0x10063d270 <nil>  [] 0s} localhost 51155 <nil> <nil>}
	I0805 10:38:20.311769    9068 main.go:141] libmachine: About to run SSH command:
	hostname
	I0805 10:38:20.383542    9068 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0805 10:38:20.383560    9068 buildroot.go:166] provisioning hostname "stopped-upgrade-363000"
	I0805 10:38:20.383639    9068 main.go:141] libmachine: Using SSH client type: native
	I0805 10:38:20.383768    9068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10063aa10] 0x10063d270 <nil>  [] 0s} localhost 51155 <nil> <nil>}
	I0805 10:38:20.383773    9068 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-363000 && echo "stopped-upgrade-363000" | sudo tee /etc/hostname
	I0805 10:38:20.456069    9068 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-363000
	
	I0805 10:38:20.456121    9068 main.go:141] libmachine: Using SSH client type: native
	I0805 10:38:20.456240    9068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10063aa10] 0x10063d270 <nil>  [] 0s} localhost 51155 <nil> <nil>}
	I0805 10:38:20.456251    9068 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-363000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-363000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-363000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 10:38:20.530105    9068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
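The three SSH exchanges above are minikube's hostname provisioning: read the current hostname, set the profile name and persist it to /etc/hostname, then guarantee a 127.0.1.1 entry so the name resolves locally. A minimal standalone sketch of the same sequence, run inside the guest (GNU grep/sed assumed; NAME stands in for the profile name):

    #!/bin/bash
    # Sketch of the hostname-provisioning step; NAME is the minikube
    # profile name (stopped-upgrade-363000 in the log above).
    NAME=stopped-upgrade-363000

    sudo hostname "$NAME" && echo "$NAME" | sudo tee /etc/hostname

    # Rewrite an existing 127.0.1.1 entry, otherwise append one, so
    # sudo and kubelet can resolve the new hostname without DNS.
    if ! grep -q "[[:space:]]$NAME\$" /etc/hosts; then
      if grep -q '^127\.0\.1\.1[[:space:]]' /etc/hosts; then
        sudo sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $NAME/" /etc/hosts
      else
        echo "127.0.1.1 $NAME" | sudo tee -a /etc/hosts
      fi
    fi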
	I0805 10:38:20.530118    9068 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19374-6507/.minikube CaCertPath:/Users/jenkins/minikube-integration/19374-6507/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19374-6507/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19374-6507/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19374-6507/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19374-6507/.minikube}
	I0805 10:38:20.530124    9068 buildroot.go:174] setting up certificates
	I0805 10:38:20.530128    9068 provision.go:84] configureAuth start
	I0805 10:38:20.530132    9068 provision.go:143] copyHostCerts
	I0805 10:38:20.530207    9068 exec_runner.go:144] found /Users/jenkins/minikube-integration/19374-6507/.minikube/cert.pem, removing ...
	I0805 10:38:20.530213    9068 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19374-6507/.minikube/cert.pem
	I0805 10:38:20.530314    9068 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19374-6507/.minikube/cert.pem (1123 bytes)
	I0805 10:38:20.530513    9068 exec_runner.go:144] found /Users/jenkins/minikube-integration/19374-6507/.minikube/key.pem, removing ...
	I0805 10:38:20.530516    9068 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19374-6507/.minikube/key.pem
	I0805 10:38:20.530560    9068 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19374-6507/.minikube/key.pem (1679 bytes)
	I0805 10:38:20.530656    9068 exec_runner.go:144] found /Users/jenkins/minikube-integration/19374-6507/.minikube/ca.pem, removing ...
	I0805 10:38:20.530659    9068 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19374-6507/.minikube/ca.pem
	I0805 10:38:20.530706    9068 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19374-6507/.minikube/ca.pem (1082 bytes)
	I0805 10:38:20.530807    9068 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19374-6507/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19374-6507/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-363000 san=[127.0.0.1 localhost minikube stopped-upgrade-363000]
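provision.go generates the Docker server certificate in Go, signing it with the machine CA and embedding exactly the SANs listed above. For illustration only, a rough openssl equivalent (the org and validity period are assumptions, not values from the run):

    # Assumes ca.pem/ca-key.pem from the .minikube/certs directory; the
    # SAN list mirrors the log line above.
    openssl genrsa -out server-key.pem 2048
    openssl req -new -key server-key.pem -subj "/O=jenkins.stopped-upgrade-363000" -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -days 1095 -out server.pem \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:stopped-upgrade-363000')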
	I0805 10:38:21.327607    9085 start.go:364] duration metric: took 8.903805959s to acquireMachinesLock for "running-upgrade-952000"
	I0805 10:38:21.327632    9085 start.go:96] Skipping create...Using existing machine configuration
	I0805 10:38:21.327639    9085 fix.go:54] fixHost starting: 
	I0805 10:38:21.328275    9085 fix.go:112] recreateIfNeeded on running-upgrade-952000: state=Running err=<nil>
	W0805 10:38:21.328286    9085 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 10:38:21.336425    9085 out.go:177] * Updating the running qemu2 "running-upgrade-952000" VM ...
	I0805 10:38:21.340391    9085 machine.go:94] provisionDockerMachine start ...
	I0805 10:38:21.340480    9085 main.go:141] libmachine: Using SSH client type: native
	I0805 10:38:21.340641    9085 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10270ea10] 0x102711270 <nil>  [] 0s} localhost 51192 <nil> <nil>}
	I0805 10:38:21.340645    9085 main.go:141] libmachine: About to run SSH command:
	hostname
	I0805 10:38:21.412639    9085 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-952000
	
	I0805 10:38:21.412654    9085 buildroot.go:166] provisioning hostname "running-upgrade-952000"
	I0805 10:38:21.412696    9085 main.go:141] libmachine: Using SSH client type: native
	I0805 10:38:21.412823    9085 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10270ea10] 0x102711270 <nil>  [] 0s} localhost 51192 <nil> <nil>}
	I0805 10:38:21.412829    9085 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-952000 && echo "running-upgrade-952000" | sudo tee /etc/hostname
	I0805 10:38:21.487855    9085 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-952000
	
	I0805 10:38:21.487933    9085 main.go:141] libmachine: Using SSH client type: native
	I0805 10:38:21.488056    9085 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10270ea10] 0x102711270 <nil>  [] 0s} localhost 51192 <nil> <nil>}
	I0805 10:38:21.488066    9085 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-952000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-952000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-952000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 10:38:21.559297    9085 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 10:38:21.559310    9085 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19374-6507/.minikube CaCertPath:/Users/jenkins/minikube-integration/19374-6507/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19374-6507/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19374-6507/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19374-6507/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19374-6507/.minikube}
	I0805 10:38:21.559320    9085 buildroot.go:174] setting up certificates
	I0805 10:38:21.559328    9085 provision.go:84] configureAuth start
	I0805 10:38:21.559336    9085 provision.go:143] copyHostCerts
	I0805 10:38:21.559402    9085 exec_runner.go:144] found /Users/jenkins/minikube-integration/19374-6507/.minikube/ca.pem, removing ...
	I0805 10:38:21.559412    9085 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19374-6507/.minikube/ca.pem
	I0805 10:38:21.559535    9085 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19374-6507/.minikube/ca.pem (1082 bytes)
	I0805 10:38:21.559700    9085 exec_runner.go:144] found /Users/jenkins/minikube-integration/19374-6507/.minikube/cert.pem, removing ...
	I0805 10:38:21.559704    9085 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19374-6507/.minikube/cert.pem
	I0805 10:38:21.559749    9085 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19374-6507/.minikube/cert.pem (1123 bytes)
	I0805 10:38:21.559879    9085 exec_runner.go:144] found /Users/jenkins/minikube-integration/19374-6507/.minikube/key.pem, removing ...
	I0805 10:38:21.559884    9085 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19374-6507/.minikube/key.pem
	I0805 10:38:21.559923    9085 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19374-6507/.minikube/key.pem (1679 bytes)
	I0805 10:38:21.560019    9085 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19374-6507/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19374-6507/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-952000 san=[127.0.0.1 localhost minikube running-upgrade-952000]
	I0805 10:38:21.637312    9085 provision.go:177] copyRemoteCerts
	I0805 10:38:21.637347    9085 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 10:38:21.637355    9085 sshutil.go:53] new ssh client: &{IP:localhost Port:51192 SSHKeyPath:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/running-upgrade-952000/id_rsa Username:docker}
	I0805 10:38:21.675733    9085 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 10:38:21.682924    9085 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0805 10:38:21.689725    9085 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0805 10:38:21.696361    9085 provision.go:87] duration metric: took 137.02725ms to configureAuth
	I0805 10:38:21.696370    9085 buildroot.go:189] setting minikube options for container-runtime
	I0805 10:38:21.696495    9085 config.go:182] Loaded profile config "running-upgrade-952000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0805 10:38:21.696536    9085 main.go:141] libmachine: Using SSH client type: native
	I0805 10:38:21.696621    9085 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10270ea10] 0x102711270 <nil>  [] 0s} localhost 51192 <nil> <nil>}
	I0805 10:38:21.696625    9085 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0805 10:38:21.771230    9085 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0805 10:38:21.771249    9085 buildroot.go:70] root file system type: tmpfs
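The df probe tells the provisioner what backs the root filesystem; tmpfs means the buildroot guest runs from RAM, so anything written under /lib/systemd/system is lost when the VM is recreated and must be re-rendered on every provision pass. As a standalone check (the message is illustrative):

    # Detect a RAM-backed root, as buildroot.go does above.
    fstype=$(df --output=fstype / | tail -n 1)
    if [ "$fstype" = "tmpfs" ]; then
      echo "live/tmpfs root: rendered units must be reapplied on each provision"
    fi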
	I0805 10:38:21.771298    9085 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0805 10:38:21.771368    9085 main.go:141] libmachine: Using SSH client type: native
	I0805 10:38:21.771501    9085 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10270ea10] 0x102711270 <nil>  [] 0s} localhost 51192 <nil> <nil>}
	I0805 10:38:21.771535    9085 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0805 10:38:21.859897    9085 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0805 10:38:21.859958    9085 main.go:141] libmachine: Using SSH client type: native
	I0805 10:38:21.860082    9085 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10270ea10] 0x102711270 <nil>  [] 0s} localhost 51192 <nil> <nil>}
	I0805 10:38:21.860091    9085 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0805 10:38:21.934998    9085 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 10:38:21.935011    9085 machine.go:97] duration metric: took 594.621417ms to provisionDockerMachine
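The diff/mv one-liner above is an idempotence guard: the freshly rendered unit replaces the on-disk one, and Docker is restarted, only when the two differ (diff also exits non-zero when the old file is missing, as happens on the stopped-upgrade machine below). Re-provisioning a machine whose unit is already current is therefore a no-op. The same command, commented:

    # Install the rendered unit only when it actually changed.
    sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || {
      sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
      sudo systemctl -f daemon-reload    # pick up the new unit file
      sudo systemctl -f enable docker    # keep it enabled across reboots
      sudo systemctl -f restart docker   # apply the new ExecStart flags now
    }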
	I0805 10:38:21.935017    9085 start.go:293] postStartSetup for "running-upgrade-952000" (driver="qemu2")
	I0805 10:38:21.935023    9085 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 10:38:21.935076    9085 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 10:38:21.935085    9085 sshutil.go:53] new ssh client: &{IP:localhost Port:51192 SSHKeyPath:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/running-upgrade-952000/id_rsa Username:docker}
	I0805 10:38:21.972983    9085 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 10:38:21.974599    9085 info.go:137] Remote host: Buildroot 2021.02.12
	I0805 10:38:21.974606    9085 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19374-6507/.minikube/addons for local assets ...
	I0805 10:38:21.974691    9085 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19374-6507/.minikube/files for local assets ...
	I0805 10:38:21.974804    9085 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19374-6507/.minikube/files/etc/ssl/certs/70072.pem -> 70072.pem in /etc/ssl/certs
	I0805 10:38:21.974931    9085 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 10:38:21.977674    9085 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19374-6507/.minikube/files/etc/ssl/certs/70072.pem --> /etc/ssl/certs/70072.pem (1708 bytes)
	I0805 10:38:21.984818    9085 start.go:296] duration metric: took 49.797ms for postStartSetup
	I0805 10:38:21.984833    9085 fix.go:56] duration metric: took 657.205666ms for fixHost
	I0805 10:38:21.984871    9085 main.go:141] libmachine: Using SSH client type: native
	I0805 10:38:21.984984    9085 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10270ea10] 0x102711270 <nil>  [] 0s} localhost 51192 <nil> <nil>}
	I0805 10:38:21.984989    9085 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0805 10:38:22.057097    9085 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722879502.514806643
	
	I0805 10:38:22.057111    9085 fix.go:216] guest clock: 1722879502.514806643
	I0805 10:38:22.057115    9085 fix.go:229] Guest: 2024-08-05 10:38:22.514806643 -0700 PDT Remote: 2024-08-05 10:38:21.984835 -0700 PDT m=+9.669283709 (delta=529.971643ms)
	I0805 10:38:22.057127    9085 fix.go:200] guest clock delta is within tolerance: 529.971643ms
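fix.go samples the guest clock over SSH (date +%s.%N), compares it with the host clock, and forces a resync only when the drift exceeds its tolerance; the roughly 530ms delta here passes. A sketch of the comparison (minikube does the arithmetic in Go; the host-side %N below assumes GNU date, which this macOS host lacks):

    # Compare guest and host clocks; port and user match the log's SSH target.
    guest=$(ssh -p 51192 docker@localhost 'date +%s.%N')
    host=$(date +%s.%N)   # GNU date assumed for %N
    awk -v g="$guest" -v h="$host" 'BEGIN { printf "delta=%.3fs\n", g - h }'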
	I0805 10:38:22.057130    9085 start.go:83] releasing machines lock for "running-upgrade-952000", held for 729.516875ms
	I0805 10:38:22.057191    9085 ssh_runner.go:195] Run: cat /version.json
	I0805 10:38:22.057196    9085 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 10:38:22.057201    9085 sshutil.go:53] new ssh client: &{IP:localhost Port:51192 SSHKeyPath:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/running-upgrade-952000/id_rsa Username:docker}
	I0805 10:38:22.057212    9085 sshutil.go:53] new ssh client: &{IP:localhost Port:51192 SSHKeyPath:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/running-upgrade-952000/id_rsa Username:docker}
	W0805 10:38:22.057886    9085 sshutil.go:64] dial failure (will retry): dial tcp [::1]:51192: connect: connection refused
	I0805 10:38:22.057909    9085 retry.go:31] will retry after 262.220624ms: dial tcp [::1]:51192: connect: connection refused
	I0805 10:38:20.655365    9068 provision.go:177] copyRemoteCerts
	I0805 10:38:20.655403    9068 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 10:38:20.655415    9068 sshutil.go:53] new ssh client: &{IP:localhost Port:51155 SSHKeyPath:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/stopped-upgrade-363000/id_rsa Username:docker}
	I0805 10:38:20.694554    9068 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 10:38:20.701913    9068 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0805 10:38:20.709138    9068 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0805 10:38:20.716292    9068 provision.go:87] duration metric: took 186.161625ms to configureAuth
	I0805 10:38:20.716300    9068 buildroot.go:189] setting minikube options for container-runtime
	I0805 10:38:20.716419    9068 config.go:182] Loaded profile config "stopped-upgrade-363000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0805 10:38:20.716462    9068 main.go:141] libmachine: Using SSH client type: native
	I0805 10:38:20.716556    9068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10063aa10] 0x10063d270 <nil>  [] 0s} localhost 51155 <nil> <nil>}
	I0805 10:38:20.716561    9068 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0805 10:38:20.784131    9068 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0805 10:38:20.784141    9068 buildroot.go:70] root file system type: tmpfs
	I0805 10:38:20.784196    9068 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0805 10:38:20.784255    9068 main.go:141] libmachine: Using SSH client type: native
	I0805 10:38:20.784382    9068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10063aa10] 0x10063d270 <nil>  [] 0s} localhost 51155 <nil> <nil>}
	I0805 10:38:20.784416    9068 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0805 10:38:20.855853    9068 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0805 10:38:20.855908    9068 main.go:141] libmachine: Using SSH client type: native
	I0805 10:38:20.856026    9068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10063aa10] 0x10063d270 <nil>  [] 0s} localhost 51155 <nil> <nil>}
	I0805 10:38:20.856037    9068 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0805 10:38:21.211532    9068 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0805 10:38:21.211545    9068 machine.go:97] duration metric: took 900.101583ms to provisionDockerMachine
	I0805 10:38:21.211551    9068 start.go:293] postStartSetup for "stopped-upgrade-363000" (driver="qemu2")
	I0805 10:38:21.211558    9068 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 10:38:21.211616    9068 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 10:38:21.211627    9068 sshutil.go:53] new ssh client: &{IP:localhost Port:51155 SSHKeyPath:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/stopped-upgrade-363000/id_rsa Username:docker}
	I0805 10:38:21.249114    9068 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 10:38:21.250534    9068 info.go:137] Remote host: Buildroot 2021.02.12
	I0805 10:38:21.250542    9068 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19374-6507/.minikube/addons for local assets ...
	I0805 10:38:21.250609    9068 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19374-6507/.minikube/files for local assets ...
	I0805 10:38:21.250694    9068 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19374-6507/.minikube/files/etc/ssl/certs/70072.pem -> 70072.pem in /etc/ssl/certs
	I0805 10:38:21.250787    9068 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 10:38:21.253922    9068 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19374-6507/.minikube/files/etc/ssl/certs/70072.pem --> /etc/ssl/certs/70072.pem (1708 bytes)
	I0805 10:38:21.261063    9068 start.go:296] duration metric: took 49.507125ms for postStartSetup
	I0805 10:38:21.261077    9068 fix.go:56] duration metric: took 20.625449958s for fixHost
	I0805 10:38:21.261110    9068 main.go:141] libmachine: Using SSH client type: native
	I0805 10:38:21.261211    9068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10063aa10] 0x10063d270 <nil>  [] 0s} localhost 51155 <nil> <nil>}
	I0805 10:38:21.261216    9068 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0805 10:38:21.327551    9068 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722879501.820092296
	
	I0805 10:38:21.327557    9068 fix.go:216] guest clock: 1722879501.820092296
	I0805 10:38:21.327561    9068 fix.go:229] Guest: 2024-08-05 10:38:21.820092296 -0700 PDT Remote: 2024-08-05 10:38:21.261079 -0700 PDT m=+20.728287585 (delta=559.013296ms)
	I0805 10:38:21.327572    9068 fix.go:200] guest clock delta is within tolerance: 559.013296ms
	I0805 10:38:21.327575    9068 start.go:83] releasing machines lock for "stopped-upgrade-363000", held for 20.691955833s
	I0805 10:38:21.327643    9068 ssh_runner.go:195] Run: cat /version.json
	I0805 10:38:21.327650    9068 sshutil.go:53] new ssh client: &{IP:localhost Port:51155 SSHKeyPath:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/stopped-upgrade-363000/id_rsa Username:docker}
	I0805 10:38:21.327658    9068 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 10:38:21.327676    9068 sshutil.go:53] new ssh client: &{IP:localhost Port:51155 SSHKeyPath:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/stopped-upgrade-363000/id_rsa Username:docker}
	W0805 10:38:21.328227    9068 sshutil.go:64] dial failure (will retry): ssh: handshake failed: write tcp 127.0.0.1:51341->127.0.0.1:51155: write: broken pipe
	I0805 10:38:21.328242    9068 retry.go:31] will retry after 228.558254ms: ssh: handshake failed: write tcp 127.0.0.1:51341->127.0.0.1:51155: write: broken pipe
	W0805 10:38:21.363491    9068 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
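This warning is expected in the upgrade tests: the guest still boots the older ISO, which does not ship a /version.json manifest, so the runner logs the failure and continues. The probe itself is only:

    # Non-fatal probe for the ISO's embedded version manifest.
    ssh -p 51155 docker@localhost 'cat /version.json' \
      || echo 'no /version.json on this ISO; continuing'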
	I0805 10:38:21.363550    9068 ssh_runner.go:195] Run: systemctl --version
	I0805 10:38:21.365329    9068 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0805 10:38:21.367015    9068 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 10:38:21.367044    9068 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0805 10:38:21.370052    9068 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0805 10:38:21.374862    9068 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
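The two find/sed pipelines normalize whatever CNI definitions the image ships: IPv6 dst/subnet entries are deleted and the IPv4 subnet and gateway are forced to the cluster's pod network, which is what makes 87-podman-bridge.conflist usable as the bridge config. Reduced to that single file, the edit is:

    # Point an existing CNI conflist at the pod CIDR the cluster expects.
    sudo sed -i -r \
      -e 's|"subnet": ".*"|"subnet": "10.244.0.0/16"|' \
      -e 's|"gateway": ".*"|"gateway": "10.244.0.1"|' \
      /etc/cni/net.d/87-podman-bridge.conflist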
	I0805 10:38:21.374872    9068 start.go:495] detecting cgroup driver to use...
	I0805 10:38:21.374983    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 10:38:21.382646    9068 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0805 10:38:21.385921    9068 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0805 10:38:21.389129    9068 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0805 10:38:21.389149    9068 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0805 10:38:21.392479    9068 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 10:38:21.395674    9068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0805 10:38:21.398480    9068 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 10:38:21.401465    9068 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 10:38:21.404891    9068 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0805 10:38:21.407693    9068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0805 10:38:21.410458    9068 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0805 10:38:21.414104    9068 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 10:38:21.417527    9068 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 10:38:21.420678    9068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 10:38:21.502257    9068 ssh_runner.go:195] Run: sudo systemctl restart containerd
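The run of sed edits against /etc/containerd/config.toml (sandbox image, runc v2 runtime, conf_dir, unprivileged ports) serves the goal logged at containerd.go:146: align containerd with the cgroupfs driver that Docker will also be configured to use, then reload. The pivotal edit from that series is:

    # Force cgroupfs by disabling systemd-managed cgroups in containerd's
    # CRI options; the other seds in the series follow the same pattern.
    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
    sudo systemctl daemon-reload && sudo systemctl restart containerd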
	I0805 10:38:21.514420    9068 start.go:495] detecting cgroup driver to use...
	I0805 10:38:21.514493    9068 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0805 10:38:21.526253    9068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 10:38:21.531524    9068 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 10:38:21.539617    9068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 10:38:21.544324    9068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0805 10:38:21.548920    9068 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0805 10:38:21.605627    9068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0805 10:38:21.645250    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 10:38:21.651213    9068 ssh_runner.go:195] Run: which cri-dockerd
	I0805 10:38:21.652435    9068 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0805 10:38:21.654966    9068 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
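An "scp memory -->" entry means the runner streams a buffer rendered in-process straight to the remote path rather than copying a local file. Over plain SSH the equivalent pattern is tee behind sudo, since the redirection itself would run as the unprivileged login user (render_dropin below is a hypothetical stand-in for the in-process rendering):

    # Stream locally generated content to a root-owned remote file.
    render_dropin | ssh -p 51155 docker@localhost \
      'sudo tee /etc/systemd/system/cri-docker.service.d/10-cni.conf >/dev/null'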
	I0805 10:38:21.660207    9068 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0805 10:38:21.744530    9068 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0805 10:38:21.821485    9068 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0805 10:38:21.821539    9068 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0805 10:38:21.827165    9068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 10:38:21.894989    9068 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0805 10:38:23.007062    9068 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.112061542s)
	I0805 10:38:23.007167    9068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0805 10:38:23.016648    9068 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0805 10:38:23.022485    9068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0805 10:38:23.028005    9068 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0805 10:38:23.097782    9068 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0805 10:38:23.164188    9068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 10:38:23.249192    9068 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0805 10:38:23.256234    9068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0805 10:38:23.260859    9068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 10:38:23.320630    9068 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0805 10:38:23.362238    9068 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0805 10:38:23.362318    9068 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0805 10:38:23.364583    9068 start.go:563] Will wait 60s for crictl version
	I0805 10:38:23.364695    9068 ssh_runner.go:195] Run: which crictl
	I0805 10:38:23.366119    9068 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0805 10:38:23.381551    9068 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0805 10:38:23.381629    9068 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0805 10:38:23.399801    9068 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0805 10:38:23.417691    9068 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0805 10:38:23.417761    9068 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0805 10:38:23.419375    9068 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
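10.0.2.2 is QEMU's user-mode-networking alias for the host, and this one-liner makes it resolvable as host.minikube.internal inside the guest. It is idempotent by construction: strip any stale entry, append the fresh mapping, then install the result with sudo cp, because the shell redirection itself runs as the unprivileged SSH user:

    # Same pattern as the logged command, expanded for readability.
    { grep -v $'\thost.minikube.internal$' /etc/hosts   # drop any stale mapping
      printf '10.0.2.2\thost.minikube.internal\n'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts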
	I0805 10:38:23.423112    9068 kubeadm.go:883] updating cluster {Name:stopped-upgrade-363000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51187 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-363000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0805 10:38:23.423166    9068 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0805 10:38:23.423211    9068 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0805 10:38:23.433743    9068 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0805 10:38:23.433759    9068 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0805 10:38:23.433804    9068 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0805 10:38:23.437110    9068 ssh_runner.go:195] Run: which lz4
	I0805 10:38:23.438325    9068 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0805 10:38:23.439699    9068 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0805 10:38:23.439710    9068 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0805 10:38:24.385779    9068 docker.go:649] duration metric: took 947.494584ms to copy over tarball
	I0805 10:38:24.385837    9068 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0805 10:38:25.545531    9068 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.159693958s)
	I0805 10:38:25.545545    9068 ssh_runner.go:146] rm: /preloaded.tar.lz4
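The preload path avoids pulling nine images over the network: the roughly 360 MB lz4 tarball is copied into the guest once, unpacked directly over /var so /var/lib/docker arrives pre-populated (xattrs preserved to keep file capabilities), then deleted to reclaim disk. End to end, the transfer the log describes is:

    # Ship and unpack the preload tarball (port and paths from the log).
    scp -P 51155 /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 docker@localhost:/preloaded.tar.lz4
    ssh -p 51155 docker@localhost 'sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4 && sudo rm /preloaded.tar.lz4'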
	W0805 10:38:22.361101    9085 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0805 10:38:22.361168    9085 ssh_runner.go:195] Run: systemctl --version
	I0805 10:38:22.363271    9085 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0805 10:38:22.365099    9085 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 10:38:22.365126    9085 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0805 10:38:22.368213    9085 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0805 10:38:22.373115    9085 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0805 10:38:22.373124    9085 start.go:495] detecting cgroup driver to use...
	I0805 10:38:22.373192    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 10:38:22.378245    9085 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0805 10:38:22.381111    9085 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0805 10:38:22.384303    9085 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0805 10:38:22.384329    9085 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0805 10:38:22.387400    9085 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 10:38:22.390557    9085 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0805 10:38:22.395239    9085 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 10:38:22.398394    9085 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 10:38:22.401225    9085 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0805 10:38:22.404305    9085 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0805 10:38:22.407796    9085 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0805 10:38:22.411098    9085 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 10:38:22.413967    9085 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 10:38:22.416648    9085 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 10:38:22.496655    9085 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0805 10:38:22.502584    9085 start.go:495] detecting cgroup driver to use...
	I0805 10:38:22.502653    9085 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0805 10:38:22.511006    9085 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 10:38:22.517728    9085 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 10:38:22.526200    9085 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 10:38:22.530576    9085 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0805 10:38:22.534788    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 10:38:22.540412    9085 ssh_runner.go:195] Run: which cri-dockerd
	I0805 10:38:22.541606    9085 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0805 10:38:22.544715    9085 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0805 10:38:22.549428    9085 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0805 10:38:22.628192    9085 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0805 10:38:22.706284    9085 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0805 10:38:22.706340    9085 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0805 10:38:22.711611    9085 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 10:38:22.787528    9085 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0805 10:38:25.561664    9068 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0805 10:38:25.564722    9068 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0805 10:38:25.569358    9068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 10:38:25.650494    9068 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0805 10:38:27.301081    9068 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.650593375s)
	I0805 10:38:27.301179    9068 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0805 10:38:27.313820    9068 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0805 10:38:27.313845    9068 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0805 10:38:27.313853    9068 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0805 10:38:27.318763    9068 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 10:38:27.321015    9068 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0805 10:38:27.323406    9068 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0805 10:38:27.323563    9068 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 10:38:27.325941    9068 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0805 10:38:27.326151    9068 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0805 10:38:27.327665    9068 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0805 10:38:27.327753    9068 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0805 10:38:27.329088    9068 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0805 10:38:27.329315    9068 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0805 10:38:27.329906    9068 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0805 10:38:27.330595    9068 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0805 10:38:27.331804    9068 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0805 10:38:27.331890    9068 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0805 10:38:27.332220    9068 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0805 10:38:27.333209    9068 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0805 10:38:27.765065    9068 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0805 10:38:27.771435    9068 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0805 10:38:27.779233    9068 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0805 10:38:27.789964    9068 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0805 10:38:27.804316    9068 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0805 10:38:27.804338    9068 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0805 10:38:27.804342    9068 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0805 10:38:27.804349    9068 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0805 10:38:27.804400    9068 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0805 10:38:27.804400    9068 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0805 10:38:27.807484    9068 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0805 10:38:27.807501    9068 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0805 10:38:27.807538    9068 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0805 10:38:27.813638    9068 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0805 10:38:27.813661    9068 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0805 10:38:27.813740    9068 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0805 10:38:27.816656    9068 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0805 10:38:27.822929    9068 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0805 10:38:27.823057    9068 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0805 10:38:27.826612    9068 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	W0805 10:38:27.830073    9068 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0805 10:38:27.830192    9068 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0805 10:38:27.838046    9068 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0805 10:38:27.839332    9068 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0805 10:38:27.839801    9068 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0805 10:38:27.839818    9068 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0805 10:38:27.839840    9068 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0805 10:38:27.839881    9068 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0805 10:38:27.839867    9068 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0805 10:38:27.849090    9068 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0805 10:38:27.849113    9068 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0805 10:38:27.849165    9068 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0805 10:38:27.854799    9068 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0805 10:38:27.858677    9068 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0805 10:38:27.858698    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0805 10:38:27.860346    9068 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0805 10:38:27.860459    9068 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0805 10:38:27.882692    9068 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0805 10:38:27.890061    9068 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0805 10:38:27.890088    9068 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0805 10:38:27.890112    9068 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0805 10:38:27.897346    9068 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0805 10:38:27.897367    9068 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0805 10:38:27.897420    9068 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0805 10:38:27.910287    9068 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0805 10:38:27.910423    9068 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0805 10:38:27.915309    9068 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0805 10:38:27.915342    9068 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0805 10:38:27.946071    9068 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0805 10:38:27.946085    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0805 10:38:28.026363    9068 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0805 10:38:28.177559    9068 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0805 10:38:28.177573    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	W0805 10:38:28.302613    9068 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0805 10:38:28.302717    9068 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 10:38:28.330295    9068 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0805 10:38:28.330572    9068 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0805 10:38:28.330595    9068 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 10:38:28.330644    9068 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 10:38:28.344934    9068 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0805 10:38:28.345057    9068 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0805 10:38:28.346357    9068 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0805 10:38:28.346373    9068 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0805 10:38:28.375802    9068 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0805 10:38:28.375816    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0805 10:38:28.608809    9068 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0805 10:38:28.608847    9068 cache_images.go:92] duration metric: took 1.295004625s to LoadCachedImages
	W0805 10:38:28.608885    9068 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
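
Each image that did make it into the cache is streamed into the guest's Docker daemon with the `sudo cat <tarball> | docker load` pipeline seen above. A minimal local sketch of the same shape, assuming a `bash` shell and an illustrative tarball path (minikube runs this inside the VM over SSH):

package main

import (
	"fmt"
	"os/exec"
)

// loadImage streams an image tarball into the Docker daemon, the same shape
// as the "sudo cat ... | docker load" commands above. The tarball path is
// illustrative.
func loadImage(tarPath string) error {
	cmd := exec.Command("/bin/bash", "-c", fmt.Sprintf("sudo cat %s | docker load", tarPath))
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("docker load %s: %v: %s", tarPath, err, out)
	}
	return nil
}

func main() {
	if err := loadImage("/var/lib/minikube/images/pause_3.7"); err != nil {
		fmt.Println(err)
	}
}
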
	I0805 10:38:28.608891    9068 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0805 10:38:28.608964    9068 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-363000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-363000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0805 10:38:28.609035    9068 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0805 10:38:28.622219    9068 cni.go:84] Creating CNI manager for ""
	I0805 10:38:28.622233    9068 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 10:38:28.622237    9068 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0805 10:38:28.622246    9068 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-363000 NodeName:stopped-upgrade-363000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0805 10:38:28.622312    9068 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-363000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
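
The kubeadm config above is rendered from the cluster parameters logged at kubeadm.go:181. A trimmed Go sketch of that kind of templating; the template text and field names here are assumptions for illustration, not minikube's real template:

package main

import (
	"os"
	"text/template"
)

// A trimmed illustration of rendering a kubeadm config from cluster
// parameters; only a few of the fields seen in the log are shown.
const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
clusterName: mk
controlPlaneEndpoint: {{.ControlPlaneAddress}}:{{.APIServerPort}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	params := struct {
		ControlPlaneAddress, KubernetesVersion, PodSubnet, ServiceCIDR string
		APIServerPort                                                  int
	}{"control-plane.minikube.internal", "v1.24.1", "10.244.0.0/16", "10.96.0.0/12", 8443}
	template.Must(template.New("kubeadm").Parse(kubeadmTmpl)).Execute(os.Stdout, params)
}
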
	
	I0805 10:38:28.622368    9068 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0805 10:38:28.625710    9068 binaries.go:44] Found k8s binaries, skipping transfer
	I0805 10:38:28.625735    9068 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0805 10:38:28.628733    9068 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0805 10:38:28.633786    9068 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0805 10:38:28.638775    9068 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0805 10:38:28.644042    9068 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0805 10:38:28.645262    9068 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
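
The one-liner above makes the /etc/hosts update idempotent: strip any existing control-plane.minikube.internal line, append the current mapping, and copy the result back into place. The same logic as a short Go sketch, pointed at a scratch file rather than the real /etc/hosts:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry mirrors the shell one-liner above: drop any line already
// ending in "<tab>host", then append the current "ip<tab>host" mapping.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line != "" && !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	fmt.Println(ensureHostsEntry("hosts.test", "10.0.2.15", "control-plane.minikube.internal"))
}
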
	I0805 10:38:28.648837    9068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 10:38:28.737338    9068 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 10:38:28.743526    9068 certs.go:68] Setting up /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/stopped-upgrade-363000 for IP: 10.0.2.15
	I0805 10:38:28.743533    9068 certs.go:194] generating shared ca certs ...
	I0805 10:38:28.743542    9068 certs.go:226] acquiring lock for ca certs: {Name:mkd94903be2cadc29e0a5fb0c61367bd1b12d51b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 10:38:28.743816    9068 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19374-6507/.minikube/ca.key
	I0805 10:38:28.743874    9068 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19374-6507/.minikube/proxy-client-ca.key
	I0805 10:38:28.743879    9068 certs.go:256] generating profile certs ...
	I0805 10:38:28.743961    9068 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/stopped-upgrade-363000/client.key
	I0805 10:38:28.743975    9068 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/stopped-upgrade-363000/apiserver.key.98c64959
	I0805 10:38:28.743990    9068 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/stopped-upgrade-363000/apiserver.crt.98c64959 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0805 10:38:28.804850    9068 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/stopped-upgrade-363000/apiserver.crt.98c64959 ...
	I0805 10:38:28.804864    9068 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/stopped-upgrade-363000/apiserver.crt.98c64959: {Name:mkaa3b075e5add0a05595241adf2a23d191578fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 10:38:28.805187    9068 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/stopped-upgrade-363000/apiserver.key.98c64959 ...
	I0805 10:38:28.805192    9068 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/stopped-upgrade-363000/apiserver.key.98c64959: {Name:mkccc30ab8922f1da13a0605c91820e5e1a3b3cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 10:38:28.805328    9068 certs.go:381] copying /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/stopped-upgrade-363000/apiserver.crt.98c64959 -> /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/stopped-upgrade-363000/apiserver.crt
	I0805 10:38:28.805467    9068 certs.go:385] copying /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/stopped-upgrade-363000/apiserver.key.98c64959 -> /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/stopped-upgrade-363000/apiserver.key
	I0805 10:38:28.805624    9068 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/stopped-upgrade-363000/proxy-client.key
	I0805 10:38:28.805764    9068 certs.go:484] found cert: /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/7007.pem (1338 bytes)
	W0805 10:38:28.805794    9068 certs.go:480] ignoring /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/7007_empty.pem, impossibly tiny 0 bytes
	I0805 10:38:28.805800    9068 certs.go:484] found cert: /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/ca-key.pem (1675 bytes)
	I0805 10:38:28.805829    9068 certs.go:484] found cert: /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/ca.pem (1082 bytes)
	I0805 10:38:28.805857    9068 certs.go:484] found cert: /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/cert.pem (1123 bytes)
	I0805 10:38:28.805884    9068 certs.go:484] found cert: /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/key.pem (1679 bytes)
	I0805 10:38:28.805939    9068 certs.go:484] found cert: /Users/jenkins/minikube-integration/19374-6507/.minikube/files/etc/ssl/certs/70072.pem (1708 bytes)
	I0805 10:38:28.806332    9068 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19374-6507/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0805 10:38:28.813161    9068 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19374-6507/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0805 10:38:28.819672    9068 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19374-6507/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0805 10:38:28.826415    9068 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19374-6507/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0805 10:38:28.833459    9068 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/stopped-upgrade-363000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0805 10:38:28.840670    9068 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/stopped-upgrade-363000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0805 10:38:28.847325    9068 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/stopped-upgrade-363000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0805 10:38:28.853931    9068 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/stopped-upgrade-363000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0805 10:38:28.861282    9068 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19374-6507/.minikube/files/etc/ssl/certs/70072.pem --> /usr/share/ca-certificates/70072.pem (1708 bytes)
	I0805 10:38:28.868294    9068 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19374-6507/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0805 10:38:28.874697    9068 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/7007.pem --> /usr/share/ca-certificates/7007.pem (1338 bytes)
	I0805 10:38:28.881612    9068 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0805 10:38:28.886628    9068 ssh_runner.go:195] Run: openssl version
	I0805 10:38:28.888367    9068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/70072.pem && ln -fs /usr/share/ca-certificates/70072.pem /etc/ssl/certs/70072.pem"
	I0805 10:38:28.891328    9068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/70072.pem
	I0805 10:38:28.892607    9068 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  5 17:26 /usr/share/ca-certificates/70072.pem
	I0805 10:38:28.892626    9068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/70072.pem
	I0805 10:38:28.894282    9068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/70072.pem /etc/ssl/certs/3ec20f2e.0"
	I0805 10:38:28.897629    9068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0805 10:38:28.900771    9068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0805 10:38:28.902098    9068 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 17:37 /usr/share/ca-certificates/minikubeCA.pem
	I0805 10:38:28.902119    9068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0805 10:38:28.903863    9068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0805 10:38:28.906715    9068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7007.pem && ln -fs /usr/share/ca-certificates/7007.pem /etc/ssl/certs/7007.pem"
	I0805 10:38:28.909850    9068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7007.pem
	I0805 10:38:28.911232    9068 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  5 17:26 /usr/share/ca-certificates/7007.pem
	I0805 10:38:28.911249    9068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7007.pem
	I0805 10:38:28.912933    9068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7007.pem /etc/ssl/certs/51391683.0"
	I0805 10:38:28.915931    9068 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 10:38:28.917241    9068 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0805 10:38:28.919341    9068 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0805 10:38:28.921074    9068 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0805 10:38:28.923380    9068 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0805 10:38:28.925035    9068 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0805 10:38:28.926685    9068 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
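
Each `-checkend 86400` call above asks openssl whether the certificate expires within the next 24 hours (86400 seconds). An equivalent check in Go, with an illustrative certificate path:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin answers the same question as "openssl x509 -noout -in <cert>
// -checkend 86400": does the certificate expire within the given window?
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	expiring, err := expiresWithin("apiserver.crt", 24*time.Hour) // illustrative path
	fmt.Println(expiring, err)
}
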
	I0805 10:38:28.928506    9068 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-363000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51187 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-363000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0805 10:38:28.928586    9068 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0805 10:38:28.938944    9068 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0805 10:38:28.941948    9068 kubeadm.go:408] found existing configuration files, will attempt cluster restart
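
The `sudo ls` probe above is the restart-vs-init decision: only if the kubeadm flags file, the kubelet config, and the etcd data directory all exist does minikube attempt a cluster restart instead of a fresh init. A sketch of that decision:

package main

import (
	"fmt"
	"os"
)

// canRestart reproduces the decision above: all three paths must exist for
// a restart to be attempted.
func canRestart(paths ...string) bool {
	for _, p := range paths {
		if _, err := os.Stat(p); err != nil {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(canRestart(
		"/var/lib/kubelet/kubeadm-flags.env",
		"/var/lib/kubelet/config.yaml",
		"/var/lib/minikube/etcd",
	))
}
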
	I0805 10:38:28.941959    9068 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0805 10:38:28.941984    9068 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0805 10:38:28.944709    9068 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0805 10:38:28.944745    9068 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-363000" does not appear in /Users/jenkins/minikube-integration/19374-6507/kubeconfig
	I0805 10:38:28.944759    9068 kubeconfig.go:62] /Users/jenkins/minikube-integration/19374-6507/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-363000" cluster setting kubeconfig missing "stopped-upgrade-363000" context setting]
	I0805 10:38:28.944931    9068 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19374-6507/kubeconfig: {Name:mkf52f0a49b2ae63f3d2905c5633513b3086a0af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 10:38:28.945614    9068 kapi.go:59] client config for stopped-upgrade-363000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/stopped-upgrade-363000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/stopped-upgrade-363000/client.key", CAFile:"/Users/jenkins/minikube-integration/19374-6507/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1019d02e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0805 10:38:28.946438    9068 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0805 10:38:28.949060    9068 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-363000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
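
Drift detection here is just `diff -u` semantics: exit status 0 means the rendered config matches what is on disk, 1 means it drifted (as above, where the criSocket gained its unix:// scheme and the cgroup driver changed from systemd to cgroupfs), and anything else is an error. A small Go sketch of that decision:

package main

import (
	"fmt"
	"os/exec"
)

// configDrifted wraps the "diff -u old new" convention above: exit 0 means
// identical, exit 1 means the files differ, anything else is an error.
func configDrifted(oldPath, newPath string) (bool, string, error) {
	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
	if err == nil {
		return false, "", nil // no drift
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		return true, string(out), nil // drift; out holds the unified diff
	}
	return false, "", err
}

func main() {
	drifted, diff, err := configDrifted("kubeadm.yaml", "kubeadm.yaml.new") // illustrative paths
	fmt.Println(drifted, err)
	fmt.Print(diff)
}
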
	I0805 10:38:28.949065    9068 kubeadm.go:1160] stopping kube-system containers ...
	I0805 10:38:28.949102    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0805 10:38:28.959580    9068 docker.go:483] Stopping containers: [91f7a199884a 6b94189c4353 e1cc9e5e2f59 3c41f12d029f 636206c34e2e a93dea7a5880 d01ea66fa9b2 0083d25943ab]
	I0805 10:38:28.959641    9068 ssh_runner.go:195] Run: docker stop 91f7a199884a 6b94189c4353 e1cc9e5e2f59 3c41f12d029f 636206c34e2e a93dea7a5880 d01ea66fa9b2 0083d25943ab
	I0805 10:38:28.970025    9068 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0805 10:38:28.975565    9068 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 10:38:28.978278    9068 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 10:38:28.978284    9068 kubeadm.go:157] found existing configuration files:
	
	I0805 10:38:28.978306    9068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51187 /etc/kubernetes/admin.conf
	I0805 10:38:28.981156    9068 kubeadm.go:163] "https://control-plane.minikube.internal:51187" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51187 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 10:38:28.981177    9068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 10:38:28.983711    9068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51187 /etc/kubernetes/kubelet.conf
	I0805 10:38:28.986037    9068 kubeadm.go:163] "https://control-plane.minikube.internal:51187" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51187 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 10:38:28.986057    9068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 10:38:28.988914    9068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51187 /etc/kubernetes/controller-manager.conf
	I0805 10:38:28.991528    9068 kubeadm.go:163] "https://control-plane.minikube.internal:51187" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51187 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 10:38:28.991547    9068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 10:38:28.994050    9068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51187 /etc/kubernetes/scheduler.conf
	I0805 10:38:28.996874    9068 kubeadm.go:163] "https://control-plane.minikube.internal:51187" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51187 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 10:38:28.996895    9068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
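
The grep-then-rm pairs above remove any kubeconfig that does not reference the expected control-plane endpoint, so the `kubeadm init phase kubeconfig` run that follows regenerates them all. The equivalent check as a Go sketch, with illustrative paths:

package main

import (
	"fmt"
	"os"
	"strings"
)

// removeStaleKubeconfig mirrors the grep-then-rm sequence above: a kubeconfig
// that does not mention the expected endpoint is deleted so kubeadm can
// regenerate it.
func removeStaleKubeconfig(path, endpoint string) error {
	data, err := os.ReadFile(path)
	if os.IsNotExist(err) {
		return nil // already absent, as in the log above
	}
	if err != nil {
		return err
	}
	if strings.Contains(string(data), endpoint) {
		return nil // still points at the right control plane
	}
	return os.Remove(path)
}

func main() {
	err := removeStaleKubeconfig("/etc/kubernetes/admin.conf",
		"https://control-plane.minikube.internal:51187")
	fmt.Println(err)
}
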
	I0805 10:38:28.999298    9068 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 10:38:29.002116    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 10:38:29.025344    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 10:38:29.436613    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0805 10:38:29.573780    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 10:38:29.604987    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0805 10:38:29.640014    9068 api_server.go:52] waiting for apiserver process to appear ...
	I0805 10:38:29.640093    9068 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 10:38:30.142187    9068 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 10:38:30.642216    9068 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 10:38:30.646459    9068 api_server.go:72] duration metric: took 1.006459167s to wait for apiserver process to appear ...
	I0805 10:38:30.646467    9068 api_server.go:88] waiting for apiserver healthz status ...
	I0805 10:38:30.646477    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
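
The healthz wait that starts here polls https://10.0.2.15:8443/healthz roughly every 500ms until the apiserver answers 200 OK or the wait expires; one probe below times out with a context deadline before the loop retries. A minimal Go sketch of such a poller; unlike minikube, which builds its client from the cluster CA and client certs, this one skips TLS verification for brevity:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver until /healthz returns 200 OK or the
// deadline passes.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond) // roughly the retry cadence in the log
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	fmt.Println(waitForHealthz("https://10.0.2.15:8443/healthz", 30*time.Second))
}
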
	I0805 10:38:35.459269    9085 ssh_runner.go:235] Completed: sudo systemctl restart docker: (12.67189075s)
	I0805 10:38:35.459350    9085 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0805 10:38:35.464208    9085 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0805 10:38:35.472437    9085 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0805 10:38:35.478070    9085 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0805 10:38:35.571486    9085 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0805 10:38:35.636233    9085 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 10:38:35.701326    9085 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0805 10:38:35.707587    9085 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0805 10:38:35.712354    9085 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 10:38:35.776486    9085 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0805 10:38:35.817691    9085 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0805 10:38:35.817758    9085 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0805 10:38:35.819954    9085 start.go:563] Will wait 60s for crictl version
	I0805 10:38:35.820009    9085 ssh_runner.go:195] Run: which crictl
	I0805 10:38:35.821428    9085 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0805 10:38:35.833460    9085 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0805 10:38:35.833523    9085 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0805 10:38:35.847218    9085 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0805 10:38:35.864984    9085 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0805 10:38:35.865050    9085 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0805 10:38:35.866439    9085 kubeadm.go:883] updating cluster {Name:running-upgrade-952000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51256 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-952000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0805 10:38:35.866485    9085 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0805 10:38:35.866524    9085 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0805 10:38:35.877333    9085 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0805 10:38:35.877341    9085 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0805 10:38:35.877383    9085 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0805 10:38:35.880512    9085 ssh_runner.go:195] Run: which lz4
	I0805 10:38:35.881945    9085 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0805 10:38:35.883272    9085 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0805 10:38:35.883283    9085 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0805 10:38:36.841804    9085 docker.go:649] duration metric: took 959.902792ms to copy over tarball
	I0805 10:38:36.841863    9085 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0805 10:38:35.648516    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:38:35.648533    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:38:38.180777    9085 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.338918625s)
	I0805 10:38:38.180810    9085 ssh_runner.go:146] rm: /preloaded.tar.lz4
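
The preload path above avoids per-image loading entirely: scp a single lz4-compressed tarball into the guest, unpack it into /var while preserving security.capability extended attributes (so binaries keep their file capabilities), then delete the tarball. A Go sketch of the same tar invocation, with illustrative paths; it needs root and an lz4 binary on the target:

package main

import (
	"fmt"
	"os/exec"
)

// extractPreload mirrors the tar command above: decompress the lz4 preload
// tarball into the destination directory, keeping security.capability xattrs.
func extractPreload(tarball, dest string) error {
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", dest, "-xf", tarball)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("tar: %v: %s", err, out)
	}
	return nil
}

func main() {
	fmt.Println(extractPreload("/preloaded.tar.lz4", "/var"))
}
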
	I0805 10:38:38.196691    9085 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0805 10:38:38.200049    9085 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0805 10:38:38.205000    9085 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 10:38:38.268995    9085 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0805 10:38:39.486144    9085 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.217149s)
	I0805 10:38:39.486254    9085 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0805 10:38:39.499952    9085 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0805 10:38:39.499969    9085 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0805 10:38:39.499973    9085 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0805 10:38:39.503975    9085 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 10:38:39.505389    9085 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0805 10:38:39.507495    9085 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0805 10:38:39.507944    9085 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 10:38:39.510404    9085 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0805 10:38:39.510420    9085 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0805 10:38:39.511695    9085 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0805 10:38:39.512088    9085 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0805 10:38:39.513301    9085 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0805 10:38:39.513522    9085 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0805 10:38:39.514614    9085 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0805 10:38:39.514632    9085 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0805 10:38:39.515826    9085 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0805 10:38:39.515962    9085 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0805 10:38:39.516981    9085 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0805 10:38:39.517718    9085 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0805 10:38:39.942753    9085 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0805 10:38:39.942753    9085 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0805 10:38:39.947674    9085 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0805 10:38:39.959826    9085 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0805 10:38:39.969438    9085 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0805 10:38:39.974564    9085 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0805 10:38:39.974581    9085 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0805 10:38:39.974594    9085 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0805 10:38:39.974594    9085 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0805 10:38:39.974641    9085 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0805 10:38:39.974641    9085 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0805 10:38:39.975544    9085 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0805 10:38:39.975561    9085 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0805 10:38:39.975590    9085 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	W0805 10:38:39.977063    9085 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0805 10:38:39.977295    9085 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0805 10:38:39.978721    9085 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0805 10:38:39.978738    9085 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0805 10:38:39.978774    9085 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0805 10:38:40.001918    9085 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0805 10:38:40.017878    9085 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0805 10:38:40.017905    9085 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0805 10:38:40.017966    9085 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0805 10:38:40.017968    9085 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0805 10:38:40.031005    9085 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0805 10:38:40.031030    9085 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0805 10:38:40.031092    9085 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0805 10:38:40.031106    9085 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0805 10:38:40.031108    9085 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0805 10:38:40.031149    9085 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0805 10:38:40.038053    9085 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0805 10:38:40.038070    9085 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0805 10:38:40.038072    9085 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0805 10:38:40.038117    9085 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0805 10:38:40.038165    9085 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0805 10:38:40.043506    9085 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0805 10:38:40.043561    9085 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0805 10:38:40.043573    9085 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0805 10:38:40.043607    9085 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0805 10:38:40.058285    9085 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0805 10:38:40.058296    9085 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0805 10:38:40.058310    9085 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0805 10:38:40.058394    9085 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0805 10:38:40.062943    9085 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0805 10:38:40.062976    9085 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0805 10:38:40.108802    9085 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0805 10:38:40.108824    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	W0805 10:38:40.160283    9085 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0805 10:38:40.160382    9085 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 10:38:40.215793    9085 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0805 10:38:40.215814    9085 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0805 10:38:40.215820    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0805 10:38:40.215864    9085 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0805 10:38:40.215879    9085 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 10:38:40.215941    9085 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 10:38:40.339918    9085 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0805 10:38:40.340001    9085 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0805 10:38:40.340103    9085 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0805 10:38:40.343369    9085 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0805 10:38:40.343388    9085 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0805 10:38:40.421495    9085 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0805 10:38:40.421511    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0805 10:38:40.922761    9085 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0805 10:38:40.922785    9085 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0805 10:38:40.922792    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0805 10:38:41.290006    9085 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0805 10:38:41.290057    9085 cache_images.go:92] duration metric: took 1.790100583s to LoadCachedImages
	W0805 10:38:41.290103    9085 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1: no such file or directory
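Each transferred tarball is then streamed into the runtime with `sudo cat <file> | docker load`, so the read happens as root on the guest side of the pipe. Note the warning above: one cached image (kube-controller-manager_v1.24.1) is missing from the host cache, so LoadCachedImages fails as a whole even though pause, coredns, storage-provisioner, and etcd loaded. A hedged sketch of the load step (the "guest" alias is illustrative):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // loadImage mirrors docker.go:304 in the log: stream a tarball into
    // `docker load` on the guest through a bash pipeline run as root.
    func loadImage(path string) error {
        pipeline := fmt.Sprintf("sudo cat %s | docker load", path)
        return exec.Command("ssh", "guest", "/bin/bash", "-c", pipeline).Run()
    }

    func main() {
        if err := loadImage("/var/lib/minikube/images/etcd_3.5.3-0"); err != nil {
            fmt.Println("load failed:", err)
        }
    }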
	I0805 10:38:41.290111    9085 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0805 10:38:41.290164    9085 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-952000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-952000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0805 10:38:41.290250    9085 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0805 10:38:41.358858    9085 cni.go:84] Creating CNI manager for ""
	I0805 10:38:41.358875    9085 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 10:38:41.358880    9085 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0805 10:38:41.358889    9085 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-952000 NodeName:running-upgrade-952000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0805 10:38:41.358959    9085 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-952000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
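The file just rendered is four YAML documents in one, separated by `---`: InitConfiguration (node-local bootstrap: endpoint, tokens, CRI socket), ClusterConfiguration (control-plane layout, cert SANs, etcd), KubeletConfiguration, and KubeProxyConfiguration. kubeadm accepts them all in a single file. A small sketch that splits such a file into its documents and prints each kind (plain string handling; no kubeadm API types assumed):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        raw, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
        if err != nil {
            panic(err)
        }
        // kubeadm allows several API objects in one file, separated by "---".
        for _, doc := range strings.Split(string(raw), "\n---\n") {
            for _, line := range strings.Split(doc, "\n") {
                if strings.HasPrefix(line, "kind: ") {
                    fmt.Println(strings.TrimPrefix(line, "kind: "))
                }
            }
        }
    }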
	I0805 10:38:41.359023    9085 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0805 10:38:41.362113    9085 binaries.go:44] Found k8s binaries, skipping transfer
	I0805 10:38:41.362144    9085 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0805 10:38:41.369503    9085 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0805 10:38:41.378689    9085 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0805 10:38:41.389111    9085 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0805 10:38:41.403263    9085 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0805 10:38:41.405544    9085 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 10:38:41.527140    9085 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 10:38:41.536476    9085 certs.go:68] Setting up /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/running-upgrade-952000 for IP: 10.0.2.15
	I0805 10:38:41.536489    9085 certs.go:194] generating shared ca certs ...
	I0805 10:38:41.536502    9085 certs.go:226] acquiring lock for ca certs: {Name:mkd94903be2cadc29e0a5fb0c61367bd1b12d51b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 10:38:41.536663    9085 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19374-6507/.minikube/ca.key
	I0805 10:38:41.536699    9085 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19374-6507/.minikube/proxy-client-ca.key
	I0805 10:38:41.536704    9085 certs.go:256] generating profile certs ...
	I0805 10:38:41.536792    9085 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/running-upgrade-952000/client.key
	I0805 10:38:41.536811    9085 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/running-upgrade-952000/apiserver.key.40e1dc2a
	I0805 10:38:41.536823    9085 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/running-upgrade-952000/apiserver.crt.40e1dc2a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0805 10:38:41.663164    9085 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/running-upgrade-952000/apiserver.crt.40e1dc2a ...
	I0805 10:38:41.663179    9085 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/running-upgrade-952000/apiserver.crt.40e1dc2a: {Name:mkc97aa80e1eca14446267d385a711ca3d848970 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 10:38:41.663417    9085 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/running-upgrade-952000/apiserver.key.40e1dc2a ...
	I0805 10:38:41.663422    9085 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/running-upgrade-952000/apiserver.key.40e1dc2a: {Name:mkbf77a8c5e9db24027092d24de75eec96aed14a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 10:38:41.663550    9085 certs.go:381] copying /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/running-upgrade-952000/apiserver.crt.40e1dc2a -> /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/running-upgrade-952000/apiserver.crt
	I0805 10:38:41.663682    9085 certs.go:385] copying /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/running-upgrade-952000/apiserver.key.40e1dc2a -> /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/running-upgrade-952000/apiserver.key
	I0805 10:38:41.663845    9085 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/running-upgrade-952000/proxy-client.key
	I0805 10:38:41.663988    9085 certs.go:484] found cert: /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/7007.pem (1338 bytes)
	W0805 10:38:41.664024    9085 certs.go:480] ignoring /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/7007_empty.pem, impossibly tiny 0 bytes
	I0805 10:38:41.664031    9085 certs.go:484] found cert: /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/ca-key.pem (1675 bytes)
	I0805 10:38:41.664059    9085 certs.go:484] found cert: /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/ca.pem (1082 bytes)
	I0805 10:38:41.664078    9085 certs.go:484] found cert: /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/cert.pem (1123 bytes)
	I0805 10:38:41.664095    9085 certs.go:484] found cert: /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/key.pem (1679 bytes)
	I0805 10:38:41.664137    9085 certs.go:484] found cert: /Users/jenkins/minikube-integration/19374-6507/.minikube/files/etc/ssl/certs/70072.pem (1708 bytes)
	I0805 10:38:41.664498    9085 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19374-6507/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0805 10:38:41.675442    9085 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19374-6507/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0805 10:38:41.682228    9085 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19374-6507/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0805 10:38:41.689674    9085 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19374-6507/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0805 10:38:41.697756    9085 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/running-upgrade-952000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0805 10:38:41.705512    9085 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/running-upgrade-952000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0805 10:38:41.713467    9085 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/running-upgrade-952000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0805 10:38:41.721975    9085 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/running-upgrade-952000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0805 10:38:41.747102    9085 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19374-6507/.minikube/files/etc/ssl/certs/70072.pem --> /usr/share/ca-certificates/70072.pem (1708 bytes)
	I0805 10:38:41.754260    9085 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19374-6507/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0805 10:38:41.761332    9085 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/7007.pem --> /usr/share/ca-certificates/7007.pem (1338 bytes)
	I0805 10:38:41.768229    9085 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0805 10:38:41.773264    9085 ssh_runner.go:195] Run: openssl version
	I0805 10:38:41.775064    9085 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0805 10:38:41.778514    9085 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0805 10:38:41.779956    9085 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 17:37 /usr/share/ca-certificates/minikubeCA.pem
	I0805 10:38:41.779978    9085 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0805 10:38:41.781926    9085 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0805 10:38:41.784596    9085 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7007.pem && ln -fs /usr/share/ca-certificates/7007.pem /etc/ssl/certs/7007.pem"
	I0805 10:38:41.787865    9085 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7007.pem
	I0805 10:38:41.789381    9085 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  5 17:26 /usr/share/ca-certificates/7007.pem
	I0805 10:38:41.789400    9085 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7007.pem
	I0805 10:38:41.791058    9085 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7007.pem /etc/ssl/certs/51391683.0"
	I0805 10:38:41.793831    9085 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/70072.pem && ln -fs /usr/share/ca-certificates/70072.pem /etc/ssl/certs/70072.pem"
	I0805 10:38:41.796859    9085 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/70072.pem
	I0805 10:38:41.798284    9085 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  5 17:26 /usr/share/ca-certificates/70072.pem
	I0805 10:38:41.798304    9085 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/70072.pem
	I0805 10:38:41.800175    9085 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/70072.pem /etc/ssl/certs/3ec20f2e.0"
	I0805 10:38:41.803360    9085 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 10:38:41.804823    9085 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0805 10:38:41.806600    9085 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0805 10:38:41.808242    9085 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0805 10:38:41.809992    9085 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0805 10:38:41.812212    9085 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0805 10:38:41.814045    9085 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
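The cert plumbing above does two distinct things. First, each CA PEM under /usr/share/ca-certificates is linked into /etc/ssl/certs under its OpenSSL subject-hash name (b5213941.0, 51391683.0, 3ec20f2e.0 here), which is how OpenSSL's trust-store lookup finds it. Second, `openssl x509 -checkend 86400` asks whether each control-plane cert stays valid for at least the next 24 hours (exit 0 yes, 1 no). A sketch of the hash-link step, shelling out to openssl just as the log does (paths illustrative):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // hashLink reproduces the log's symlink convention: compute a PEM's
    // OpenSSL subject hash and link it as <hash>.0 in certDir.
    func hashLink(pem, certDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            return err
        }
        link := certDir + "/" + strings.TrimSpace(string(out)) + ".0"
        os.Remove(link) // mirror `ln -fs`: replace any stale link
        return os.Symlink(pem, link)
    }

    func main() {
        if err := hashLink("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
            fmt.Println(err)
        }
    }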
	I0805 10:38:41.815692    9085 kubeadm.go:392] StartCluster: {Name:running-upgrade-952000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51256 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-952000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0805 10:38:41.815761    9085 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0805 10:38:41.825934    9085 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0805 10:38:41.829099    9085 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0805 10:38:41.829106    9085 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0805 10:38:41.829132    9085 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0805 10:38:41.832407    9085 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0805 10:38:41.832701    9085 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-952000" does not appear in /Users/jenkins/minikube-integration/19374-6507/kubeconfig
	I0805 10:38:41.832801    9085 kubeconfig.go:62] /Users/jenkins/minikube-integration/19374-6507/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-952000" cluster setting kubeconfig missing "running-upgrade-952000" context setting]
	I0805 10:38:41.833003    9085 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19374-6507/kubeconfig: {Name:mkf52f0a49b2ae63f3d2905c5633513b3086a0af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 10:38:41.833408    9085 kapi.go:59] client config for running-upgrade-952000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/running-upgrade-952000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/running-upgrade-952000/client.key", CAFile:"/Users/jenkins/minikube-integration/19374-6507/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103aa42e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0805 10:38:41.833717    9085 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0805 10:38:41.836542    9085 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-952000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
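The diff shows exactly what drifted: the CRI socket gained its unix:// scheme prefix, and the kubelet cgroup driver changed from systemd to cgroupfs (with hairpinMode and runtimeRequestTimeout added), so the stale config cannot be reused and minikube reconfigures. The detection itself is just `diff -u old new`, where diff's exit status 1 means "files differ". A sketch of that check (bounded to the two paths the log names):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // diff exits 0 when files match and 1 when they differ; only the
        // latter triggers the reconfigure path seen at kubeadm.go:640.
        out, err := exec.Command("diff", "-u",
            "/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new").Output()
        if err != nil {
            fmt.Printf("config drift detected:\n%s", out)
            return
        }
        fmt.Println("config unchanged")
    }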
	I0805 10:38:41.836547    9085 kubeadm.go:1160] stopping kube-system containers ...
	I0805 10:38:41.836585    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0805 10:38:41.848399    9085 docker.go:483] Stopping containers: [13057f94c0f8 0d601c57878c 2c9aa7466dbd 7c8977dcd66d ff96720b9db2 9da153cbe1d1 2fff138ae5b4 9407b8a7dc24 ab057fb8fb35 17eaa61951a4 0a313728fc22 247cfaee0b9f a3cb59bff14b dce841a2a196 32999ae77620 d71cd5277bf8 7236f9259973 ff2c23088238 380ce6aa9a95 f532748d5913]
	I0805 10:38:41.848470    9085 ssh_runner.go:195] Run: docker stop 13057f94c0f8 0d601c57878c 2c9aa7466dbd 7c8977dcd66d ff96720b9db2 9da153cbe1d1 2fff138ae5b4 9407b8a7dc24 ab057fb8fb35 17eaa61951a4 0a313728fc22 247cfaee0b9f a3cb59bff14b dce841a2a196 32999ae77620 d71cd5277bf8 7236f9259973 ff2c23088238 380ce6aa9a95 f532748d5913
	I0805 10:38:41.977051    9085 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0805 10:38:42.048629    9085 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 10:38:42.055107    9085 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5639 Aug  5 17:38 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Aug  5 17:38 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Aug  5 17:38 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Aug  5 17:38 /etc/kubernetes/scheduler.conf
	
	I0805 10:38:42.055160    9085 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51256 /etc/kubernetes/admin.conf
	I0805 10:38:42.061518    9085 kubeadm.go:163] "https://control-plane.minikube.internal:51256" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51256 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0805 10:38:42.061555    9085 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 10:38:42.064586    9085 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51256 /etc/kubernetes/kubelet.conf
	I0805 10:38:42.067636    9085 kubeadm.go:163] "https://control-plane.minikube.internal:51256" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51256 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0805 10:38:42.067664    9085 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 10:38:42.073478    9085 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51256 /etc/kubernetes/controller-manager.conf
	I0805 10:38:42.077980    9085 kubeadm.go:163] "https://control-plane.minikube.internal:51256" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51256 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0805 10:38:42.078007    9085 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 10:38:42.081318    9085 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51256 /etc/kubernetes/scheduler.conf
	I0805 10:38:42.084536    9085 kubeadm.go:163] "https://control-plane.minikube.internal:51256" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51256 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0805 10:38:42.084568    9085 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0805 10:38:42.090023    9085 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 10:38:42.096201    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 10:38:42.139244    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 10:38:40.648658    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:38:40.648686    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:38:42.666680    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0805 10:38:42.889162    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 10:38:42.917386    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
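Rather than a full `kubeadm init`, the restart path replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated config, each with PATH pinned to the version-matched binaries directory. A sketch of that loop (phase list taken from the log; the shell invocation mirrors the Run lines above):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        binDir := "/var/lib/minikube/binaries/v1.24.1"
        phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
        for _, p := range phases {
            // Pin PATH so the matching kubeadm/kubelet release is used.
            sh := fmt.Sprintf(`sudo env PATH="%s:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, binDir, p)
            if err := exec.Command("/bin/bash", "-c", sh).Run(); err != nil {
                fmt.Println("phase failed:", p, err)
                return
            }
        }
    }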
	I0805 10:38:42.944325    9085 api_server.go:52] waiting for apiserver process to appear ...
	I0805 10:38:42.944398    9085 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 10:38:43.446464    9085 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 10:38:43.946098    9085 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 10:38:44.446452    9085 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 10:38:44.454822    9085 api_server.go:72] duration metric: took 1.510517333s to wait for apiserver process to appear ...
	I0805 10:38:44.454834    9085 api_server.go:88] waiting for apiserver healthz status ...
	I0805 10:38:44.454843    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
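From here both processes (pids 9068 and 9085 belong to two concurrently running tests) sit in the same loop: GET https://10.0.2.15:8443/healthz with a short client timeout, log "stopped" on each timeout, and retry. The long run of identical lines below is that loop making no progress because the apiserver never becomes healthy. A simplified sketch of the loop's shape (bounded retries, and TLS verification skipped here for brevity, unlike minikube's cert-authenticated client):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        for i := 0; i < 3; i++ { // minikube retries until a deadline instead
            resp, err := client.Get("https://10.0.2.15:8443/healthz")
            if err != nil {
                fmt.Println("stopped:", err) // matches api_server.go:269
                time.Sleep(5 * time.Second)
                continue
            }
            resp.Body.Close()
            fmt.Println("healthz:", resp.Status)
            return
        }
    }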
	I0805 10:38:45.649130    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:38:45.649150    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:38:49.456848    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:38:49.456869    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:38:50.649480    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:38:50.649510    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:38:54.457039    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:38:54.457094    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:38:55.649994    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:38:55.650056    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:38:59.457509    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:38:59.457530    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:39:00.650862    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:39:00.650912    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:39:04.457908    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:39:04.458017    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:39:05.652014    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:39:05.652083    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:39:09.459155    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:39:09.459199    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:39:10.653609    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:39:10.653657    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:39:14.460133    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:39:14.460189    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:39:15.655355    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:39:15.655392    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:39:19.461488    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:39:19.461528    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:39:20.657569    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:39:20.657608    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:39:24.463021    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:39:24.463074    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:39:25.659580    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:39:25.659601    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:39:29.465112    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:39:29.465159    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:39:30.661756    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:39:30.661925    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:39:30.681825    9068 logs.go:276] 2 containers: [399a26756750 3c41f12d029f]
	I0805 10:39:30.681931    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:39:30.696788    9068 logs.go:276] 2 containers: [0d7ebfcf0f52 636206c34e2e]
	I0805 10:39:30.696874    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:39:30.709359    9068 logs.go:276] 1 containers: [3fc287268b86]
	I0805 10:39:30.709431    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:39:30.720025    9068 logs.go:276] 2 containers: [ea53529dcbfd 91f7a199884a]
	I0805 10:39:30.720098    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:39:30.730363    9068 logs.go:276] 1 containers: [a29b2ad2a3db]
	I0805 10:39:30.730445    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:39:30.745329    9068 logs.go:276] 2 containers: [a2439666ad31 e1cc9e5e2f59]
	I0805 10:39:30.745396    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:39:30.756846    9068 logs.go:276] 0 containers: []
	W0805 10:39:30.756857    9068 logs.go:278] No container was found matching "kindnet"
	I0805 10:39:30.756917    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:39:30.767247    9068 logs.go:276] 2 containers: [e66ab34a3554 78e756f0378c]
	I0805 10:39:30.767264    9068 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:39:30.767269    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:39:30.873738    9068 logs.go:123] Gathering logs for storage-provisioner [78e756f0378c] ...
	I0805 10:39:30.873749    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78e756f0378c"
	I0805 10:39:30.884931    9068 logs.go:123] Gathering logs for Docker ...
	I0805 10:39:30.884941    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:39:30.909323    9068 logs.go:123] Gathering logs for kube-controller-manager [a2439666ad31] ...
	I0805 10:39:30.909335    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2439666ad31"
	I0805 10:39:30.926533    9068 logs.go:123] Gathering logs for kube-controller-manager [e1cc9e5e2f59] ...
	I0805 10:39:30.926546    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1cc9e5e2f59"
	I0805 10:39:30.941507    9068 logs.go:123] Gathering logs for container status ...
	I0805 10:39:30.941517    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:39:30.954116    9068 logs.go:123] Gathering logs for kube-apiserver [399a26756750] ...
	I0805 10:39:30.954130    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 399a26756750"
	I0805 10:39:30.968363    9068 logs.go:123] Gathering logs for etcd [0d7ebfcf0f52] ...
	I0805 10:39:30.968378    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d7ebfcf0f52"
	I0805 10:39:30.981650    9068 logs.go:123] Gathering logs for kube-scheduler [ea53529dcbfd] ...
	I0805 10:39:30.981665    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea53529dcbfd"
	I0805 10:39:30.993651    9068 logs.go:123] Gathering logs for coredns [3fc287268b86] ...
	I0805 10:39:30.993667    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fc287268b86"
	I0805 10:39:31.005359    9068 logs.go:123] Gathering logs for kube-proxy [a29b2ad2a3db] ...
	I0805 10:39:31.005372    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a29b2ad2a3db"
	I0805 10:39:31.016987    9068 logs.go:123] Gathering logs for storage-provisioner [e66ab34a3554] ...
	I0805 10:39:31.017001    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e66ab34a3554"
	I0805 10:39:31.028500    9068 logs.go:123] Gathering logs for kubelet ...
	I0805 10:39:31.028514    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:39:31.066912    9068 logs.go:123] Gathering logs for dmesg ...
	I0805 10:39:31.066920    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:39:31.071142    9068 logs.go:123] Gathering logs for kube-apiserver [3c41f12d029f] ...
	I0805 10:39:31.071148    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c41f12d029f"
	I0805 10:39:31.113145    9068 logs.go:123] Gathering logs for etcd [636206c34e2e] ...
	I0805 10:39:31.113157    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 636206c34e2e"
	I0805 10:39:31.132599    9068 logs.go:123] Gathering logs for kube-scheduler [91f7a199884a] ...
	I0805 10:39:31.132611    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91f7a199884a"
	I0805 10:39:33.647166    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:39:34.466032    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:39:34.466084    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:39:38.647915    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:39:38.648236    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:39:38.681080    9068 logs.go:276] 2 containers: [399a26756750 3c41f12d029f]
	I0805 10:39:38.681202    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:39:38.700592    9068 logs.go:276] 2 containers: [0d7ebfcf0f52 636206c34e2e]
	I0805 10:39:38.700695    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:39:38.714786    9068 logs.go:276] 1 containers: [3fc287268b86]
	I0805 10:39:38.714858    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:39:38.726938    9068 logs.go:276] 2 containers: [ea53529dcbfd 91f7a199884a]
	I0805 10:39:38.727017    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:39:38.737871    9068 logs.go:276] 1 containers: [a29b2ad2a3db]
	I0805 10:39:38.737936    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:39:38.748246    9068 logs.go:276] 2 containers: [a2439666ad31 e1cc9e5e2f59]
	I0805 10:39:38.748312    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:39:38.758662    9068 logs.go:276] 0 containers: []
	W0805 10:39:38.758675    9068 logs.go:278] No container was found matching "kindnet"
	I0805 10:39:38.758726    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:39:38.769092    9068 logs.go:276] 2 containers: [e66ab34a3554 78e756f0378c]
	I0805 10:39:38.769110    9068 logs.go:123] Gathering logs for kube-scheduler [ea53529dcbfd] ...
	I0805 10:39:38.769116    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea53529dcbfd"
	I0805 10:39:38.780862    9068 logs.go:123] Gathering logs for kube-controller-manager [a2439666ad31] ...
	I0805 10:39:38.780872    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2439666ad31"
	I0805 10:39:38.798158    9068 logs.go:123] Gathering logs for storage-provisioner [e66ab34a3554] ...
	I0805 10:39:38.798169    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e66ab34a3554"
	I0805 10:39:38.809973    9068 logs.go:123] Gathering logs for coredns [3fc287268b86] ...
	I0805 10:39:38.809985    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fc287268b86"
	I0805 10:39:38.821623    9068 logs.go:123] Gathering logs for kube-apiserver [399a26756750] ...
	I0805 10:39:38.821638    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 399a26756750"
	I0805 10:39:38.836141    9068 logs.go:123] Gathering logs for kube-scheduler [91f7a199884a] ...
	I0805 10:39:38.836152    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91f7a199884a"
	I0805 10:39:38.849931    9068 logs.go:123] Gathering logs for kube-proxy [a29b2ad2a3db] ...
	I0805 10:39:38.849945    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a29b2ad2a3db"
	I0805 10:39:38.861428    9068 logs.go:123] Gathering logs for dmesg ...
	I0805 10:39:38.861441    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:39:38.865822    9068 logs.go:123] Gathering logs for kube-apiserver [3c41f12d029f] ...
	I0805 10:39:38.865831    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c41f12d029f"
	I0805 10:39:38.902527    9068 logs.go:123] Gathering logs for etcd [0d7ebfcf0f52] ...
	I0805 10:39:38.902539    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d7ebfcf0f52"
	I0805 10:39:38.924563    9068 logs.go:123] Gathering logs for etcd [636206c34e2e] ...
	I0805 10:39:38.924576    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 636206c34e2e"
	I0805 10:39:38.939044    9068 logs.go:123] Gathering logs for Docker ...
	I0805 10:39:38.939058    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:39:38.964878    9068 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:39:38.964887    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:39:39.003525    9068 logs.go:123] Gathering logs for kube-controller-manager [e1cc9e5e2f59] ...
	I0805 10:39:39.003540    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1cc9e5e2f59"
	I0805 10:39:39.018760    9068 logs.go:123] Gathering logs for storage-provisioner [78e756f0378c] ...
	I0805 10:39:39.018772    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78e756f0378c"
	I0805 10:39:39.030689    9068 logs.go:123] Gathering logs for container status ...
	I0805 10:39:39.030704    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:39:39.042847    9068 logs.go:123] Gathering logs for kubelet ...
	I0805 10:39:39.042859    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:39:39.468363    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:39:39.468385    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:39:41.579427    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:39:44.470567    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:39:44.470819    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:39:44.485458    9085 logs.go:276] 2 containers: [d4f28dbbc4f1 13057f94c0f8]
	I0805 10:39:44.485551    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:39:44.496211    9085 logs.go:276] 2 containers: [275aaaabca50 d71cd5277bf8]
	I0805 10:39:44.496280    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:39:44.506037    9085 logs.go:276] 1 containers: [d211fad9684e]
	I0805 10:39:44.506101    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:39:44.516583    9085 logs.go:276] 2 containers: [ef0fa267cdc3 7c8977dcd66d]
	I0805 10:39:44.516656    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:39:44.526774    9085 logs.go:276] 1 containers: [9df378c7864e]
	I0805 10:39:44.526844    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:39:44.536887    9085 logs.go:276] 2 containers: [7b6baa84c14b 2c9aa7466dbd]
	I0805 10:39:44.536956    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:39:44.546765    9085 logs.go:276] 0 containers: []
	W0805 10:39:44.546775    9085 logs.go:278] No container was found matching "kindnet"
	I0805 10:39:44.546828    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:39:44.556919    9085 logs.go:276] 0 containers: []
	W0805 10:39:44.556930    9085 logs.go:278] No container was found matching "storage-provisioner"
	I0805 10:39:44.556938    9085 logs.go:123] Gathering logs for kube-apiserver [13057f94c0f8] ...
	I0805 10:39:44.556943    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13057f94c0f8"
	I0805 10:39:44.572860    9085 logs.go:123] Gathering logs for coredns [d211fad9684e] ...
	I0805 10:39:44.572875    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d211fad9684e"
	I0805 10:39:44.584584    9085 logs.go:123] Gathering logs for kube-scheduler [7c8977dcd66d] ...
	I0805 10:39:44.584600    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c8977dcd66d"
	I0805 10:39:44.595660    9085 logs.go:123] Gathering logs for kube-controller-manager [7b6baa84c14b] ...
	I0805 10:39:44.595672    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b6baa84c14b"
	I0805 10:39:44.613183    9085 logs.go:123] Gathering logs for kube-scheduler [ef0fa267cdc3] ...
	I0805 10:39:44.613196    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef0fa267cdc3"
	I0805 10:39:44.624553    9085 logs.go:123] Gathering logs for kube-proxy [9df378c7864e] ...
	I0805 10:39:44.624562    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9df378c7864e"
	I0805 10:39:44.635805    9085 logs.go:123] Gathering logs for kube-controller-manager [2c9aa7466dbd] ...
	I0805 10:39:44.635815    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c9aa7466dbd"
	I0805 10:39:44.648288    9085 logs.go:123] Gathering logs for Docker ...
	I0805 10:39:44.648298    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:39:44.673281    9085 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:39:44.673288    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:39:44.771964    9085 logs.go:123] Gathering logs for kube-apiserver [d4f28dbbc4f1] ...
	I0805 10:39:44.771981    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f28dbbc4f1"
	I0805 10:39:44.786706    9085 logs.go:123] Gathering logs for etcd [275aaaabca50] ...
	I0805 10:39:44.786716    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 275aaaabca50"
	I0805 10:39:44.801140    9085 logs.go:123] Gathering logs for etcd [d71cd5277bf8] ...
	I0805 10:39:44.801149    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71cd5277bf8"
	I0805 10:39:44.818222    9085 logs.go:123] Gathering logs for kubelet ...
	I0805 10:39:44.818236    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:39:44.860081    9085 logs.go:123] Gathering logs for container status ...
	I0805 10:39:44.860091    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:39:44.871515    9085 logs.go:123] Gathering logs for dmesg ...
	I0805 10:39:44.871528    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:39:46.581786    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:39:46.582042    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:39:46.608870    9068 logs.go:276] 2 containers: [399a26756750 3c41f12d029f]
	I0805 10:39:46.608999    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:39:46.625694    9068 logs.go:276] 2 containers: [0d7ebfcf0f52 636206c34e2e]
	I0805 10:39:46.625784    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:39:46.643063    9068 logs.go:276] 1 containers: [3fc287268b86]
	I0805 10:39:46.643142    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:39:46.654743    9068 logs.go:276] 2 containers: [ea53529dcbfd 91f7a199884a]
	I0805 10:39:46.654811    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:39:46.665925    9068 logs.go:276] 1 containers: [a29b2ad2a3db]
	I0805 10:39:46.665994    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:39:46.679628    9068 logs.go:276] 2 containers: [a2439666ad31 e1cc9e5e2f59]
	I0805 10:39:46.679698    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:39:46.690505    9068 logs.go:276] 0 containers: []
	W0805 10:39:46.690517    9068 logs.go:278] No container was found matching "kindnet"
	I0805 10:39:46.690572    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:39:46.700944    9068 logs.go:276] 2 containers: [e66ab34a3554 78e756f0378c]
	I0805 10:39:46.700963    9068 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:39:46.700970    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:39:46.742158    9068 logs.go:123] Gathering logs for kube-apiserver [399a26756750] ...
	I0805 10:39:46.742169    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 399a26756750"
	I0805 10:39:46.756928    9068 logs.go:123] Gathering logs for storage-provisioner [78e756f0378c] ...
	I0805 10:39:46.756940    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78e756f0378c"
	I0805 10:39:46.767948    9068 logs.go:123] Gathering logs for container status ...
	I0805 10:39:46.767960    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:39:46.780970    9068 logs.go:123] Gathering logs for Docker ...
	I0805 10:39:46.780984    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:39:46.804852    9068 logs.go:123] Gathering logs for kubelet ...
	I0805 10:39:46.804860    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:39:46.842792    9068 logs.go:123] Gathering logs for dmesg ...
	I0805 10:39:46.842800    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:39:46.847418    9068 logs.go:123] Gathering logs for kube-scheduler [91f7a199884a] ...
	I0805 10:39:46.847428    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91f7a199884a"
	I0805 10:39:46.859399    9068 logs.go:123] Gathering logs for kube-proxy [a29b2ad2a3db] ...
	I0805 10:39:46.859412    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a29b2ad2a3db"
	I0805 10:39:46.873136    9068 logs.go:123] Gathering logs for kube-controller-manager [e1cc9e5e2f59] ...
	I0805 10:39:46.873145    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1cc9e5e2f59"
	I0805 10:39:46.886742    9068 logs.go:123] Gathering logs for kube-apiserver [3c41f12d029f] ...
	I0805 10:39:46.886753    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c41f12d029f"
	I0805 10:39:46.926986    9068 logs.go:123] Gathering logs for etcd [0d7ebfcf0f52] ...
	I0805 10:39:46.926997    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d7ebfcf0f52"
	I0805 10:39:46.941111    9068 logs.go:123] Gathering logs for coredns [3fc287268b86] ...
	I0805 10:39:46.941122    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fc287268b86"
	I0805 10:39:46.952093    9068 logs.go:123] Gathering logs for kube-controller-manager [a2439666ad31] ...
	I0805 10:39:46.952103    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2439666ad31"
	I0805 10:39:46.969768    9068 logs.go:123] Gathering logs for etcd [636206c34e2e] ...
	I0805 10:39:46.969782    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 636206c34e2e"
	I0805 10:39:46.984270    9068 logs.go:123] Gathering logs for kube-scheduler [ea53529dcbfd] ...
	I0805 10:39:46.984281    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea53529dcbfd"
	I0805 10:39:46.995637    9068 logs.go:123] Gathering logs for storage-provisioner [e66ab34a3554] ...
	I0805 10:39:46.995646    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e66ab34a3554"
	I0805 10:39:49.509256    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:39:47.378002    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:39:54.511611    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:39:54.511756    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:39:54.522835    9068 logs.go:276] 2 containers: [399a26756750 3c41f12d029f]
	I0805 10:39:54.522905    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:39:54.533595    9068 logs.go:276] 2 containers: [0d7ebfcf0f52 636206c34e2e]
	I0805 10:39:54.533663    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:39:54.544165    9068 logs.go:276] 1 containers: [3fc287268b86]
	I0805 10:39:54.544237    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:39:54.554823    9068 logs.go:276] 2 containers: [ea53529dcbfd 91f7a199884a]
	I0805 10:39:54.554899    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:39:54.564974    9068 logs.go:276] 1 containers: [a29b2ad2a3db]
	I0805 10:39:54.565053    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:39:54.576694    9068 logs.go:276] 2 containers: [a2439666ad31 e1cc9e5e2f59]
	I0805 10:39:54.576770    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:39:54.586668    9068 logs.go:276] 0 containers: []
	W0805 10:39:54.586683    9068 logs.go:278] No container was found matching "kindnet"
	I0805 10:39:54.586742    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:39:54.597007    9068 logs.go:276] 2 containers: [e66ab34a3554 78e756f0378c]
	I0805 10:39:54.597025    9068 logs.go:123] Gathering logs for dmesg ...
	I0805 10:39:54.597031    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:39:54.601219    9068 logs.go:123] Gathering logs for kube-apiserver [399a26756750] ...
	I0805 10:39:54.601225    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 399a26756750"
	I0805 10:39:54.615271    9068 logs.go:123] Gathering logs for kube-scheduler [91f7a199884a] ...
	I0805 10:39:54.615282    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91f7a199884a"
	I0805 10:39:54.627097    9068 logs.go:123] Gathering logs for kube-proxy [a29b2ad2a3db] ...
	I0805 10:39:54.627107    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a29b2ad2a3db"
	I0805 10:39:54.638953    9068 logs.go:123] Gathering logs for container status ...
	I0805 10:39:54.638965    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:39:54.650961    9068 logs.go:123] Gathering logs for kubelet ...
	I0805 10:39:54.650972    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:39:54.689978    9068 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:39:54.689987    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:39:54.726888    9068 logs.go:123] Gathering logs for coredns [3fc287268b86] ...
	I0805 10:39:54.726899    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fc287268b86"
	I0805 10:39:54.738431    9068 logs.go:123] Gathering logs for storage-provisioner [e66ab34a3554] ...
	I0805 10:39:54.738444    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e66ab34a3554"
	I0805 10:39:54.749682    9068 logs.go:123] Gathering logs for storage-provisioner [78e756f0378c] ...
	I0805 10:39:54.749693    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78e756f0378c"
	I0805 10:39:54.768612    9068 logs.go:123] Gathering logs for Docker ...
	I0805 10:39:54.768626    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:39:54.791618    9068 logs.go:123] Gathering logs for kube-apiserver [3c41f12d029f] ...
	I0805 10:39:54.791625    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c41f12d029f"
	I0805 10:39:54.829917    9068 logs.go:123] Gathering logs for etcd [0d7ebfcf0f52] ...
	I0805 10:39:54.829929    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d7ebfcf0f52"
	I0805 10:39:54.843491    9068 logs.go:123] Gathering logs for etcd [636206c34e2e] ...
	I0805 10:39:54.843503    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 636206c34e2e"
	I0805 10:39:54.857576    9068 logs.go:123] Gathering logs for kube-scheduler [ea53529dcbfd] ...
	I0805 10:39:54.857590    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea53529dcbfd"
	I0805 10:39:54.869249    9068 logs.go:123] Gathering logs for kube-controller-manager [a2439666ad31] ...
	I0805 10:39:54.869262    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2439666ad31"
	I0805 10:39:54.892369    9068 logs.go:123] Gathering logs for kube-controller-manager [e1cc9e5e2f59] ...
	I0805 10:39:54.892382    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1cc9e5e2f59"
	I0805 10:39:52.379743    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:39:52.379921    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:39:52.400662    9085 logs.go:276] 2 containers: [d4f28dbbc4f1 13057f94c0f8]
	I0805 10:39:52.400740    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:39:52.417344    9085 logs.go:276] 2 containers: [275aaaabca50 d71cd5277bf8]
	I0805 10:39:52.417408    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:39:52.428661    9085 logs.go:276] 1 containers: [d211fad9684e]
	I0805 10:39:52.428731    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:39:52.438942    9085 logs.go:276] 2 containers: [ef0fa267cdc3 7c8977dcd66d]
	I0805 10:39:52.439039    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:39:52.448925    9085 logs.go:276] 1 containers: [9df378c7864e]
	I0805 10:39:52.448982    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:39:52.459571    9085 logs.go:276] 2 containers: [7b6baa84c14b 2c9aa7466dbd]
	I0805 10:39:52.459639    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:39:52.470534    9085 logs.go:276] 0 containers: []
	W0805 10:39:52.470547    9085 logs.go:278] No container was found matching "kindnet"
	I0805 10:39:52.470608    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:39:52.481836    9085 logs.go:276] 0 containers: []
	W0805 10:39:52.481847    9085 logs.go:278] No container was found matching "storage-provisioner"
	I0805 10:39:52.481855    9085 logs.go:123] Gathering logs for kube-apiserver [d4f28dbbc4f1] ...
	I0805 10:39:52.481860    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f28dbbc4f1"
	I0805 10:39:52.495655    9085 logs.go:123] Gathering logs for kube-scheduler [ef0fa267cdc3] ...
	I0805 10:39:52.495665    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef0fa267cdc3"
	I0805 10:39:52.508654    9085 logs.go:123] Gathering logs for kube-controller-manager [7b6baa84c14b] ...
	I0805 10:39:52.508667    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b6baa84c14b"
	I0805 10:39:52.526136    9085 logs.go:123] Gathering logs for kube-controller-manager [2c9aa7466dbd] ...
	I0805 10:39:52.526146    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c9aa7466dbd"
	I0805 10:39:52.543887    9085 logs.go:123] Gathering logs for Docker ...
	I0805 10:39:52.543898    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:39:52.570164    9085 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:39:52.570174    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:39:52.606754    9085 logs.go:123] Gathering logs for etcd [275aaaabca50] ...
	I0805 10:39:52.606767    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 275aaaabca50"
	I0805 10:39:52.621347    9085 logs.go:123] Gathering logs for coredns [d211fad9684e] ...
	I0805 10:39:52.621357    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d211fad9684e"
	I0805 10:39:52.634010    9085 logs.go:123] Gathering logs for kube-proxy [9df378c7864e] ...
	I0805 10:39:52.634024    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9df378c7864e"
	I0805 10:39:52.645930    9085 logs.go:123] Gathering logs for kubelet ...
	I0805 10:39:52.645942    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:39:52.686952    9085 logs.go:123] Gathering logs for dmesg ...
	I0805 10:39:52.686961    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:39:52.691024    9085 logs.go:123] Gathering logs for container status ...
	I0805 10:39:52.691033    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:39:52.703343    9085 logs.go:123] Gathering logs for kube-apiserver [13057f94c0f8] ...
	I0805 10:39:52.703358    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13057f94c0f8"
	I0805 10:39:52.715711    9085 logs.go:123] Gathering logs for etcd [d71cd5277bf8] ...
	I0805 10:39:52.715723    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71cd5277bf8"
	I0805 10:39:52.730649    9085 logs.go:123] Gathering logs for kube-scheduler [7c8977dcd66d] ...
	I0805 10:39:52.730659    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c8977dcd66d"
	I0805 10:39:55.243052    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:39:57.407375    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:40:00.245339    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:40:00.245543    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:40:00.263694    9085 logs.go:276] 2 containers: [d4f28dbbc4f1 13057f94c0f8]
	I0805 10:40:00.263784    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:40:00.277375    9085 logs.go:276] 2 containers: [275aaaabca50 d71cd5277bf8]
	I0805 10:40:00.277456    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:40:00.289079    9085 logs.go:276] 1 containers: [d211fad9684e]
	I0805 10:40:00.289152    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:40:00.299590    9085 logs.go:276] 2 containers: [ef0fa267cdc3 7c8977dcd66d]
	I0805 10:40:00.299659    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:40:00.310291    9085 logs.go:276] 1 containers: [9df378c7864e]
	I0805 10:40:00.310359    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:40:00.321683    9085 logs.go:276] 2 containers: [7b6baa84c14b 2c9aa7466dbd]
	I0805 10:40:00.321754    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:40:00.335251    9085 logs.go:276] 0 containers: []
	W0805 10:40:00.335263    9085 logs.go:278] No container was found matching "kindnet"
	I0805 10:40:00.335325    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:40:00.348994    9085 logs.go:276] 0 containers: []
	W0805 10:40:00.349004    9085 logs.go:278] No container was found matching "storage-provisioner"
	I0805 10:40:00.349012    9085 logs.go:123] Gathering logs for etcd [d71cd5277bf8] ...
	I0805 10:40:00.349017    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71cd5277bf8"
	I0805 10:40:00.363157    9085 logs.go:123] Gathering logs for Docker ...
	I0805 10:40:00.363166    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:40:00.389336    9085 logs.go:123] Gathering logs for kube-scheduler [7c8977dcd66d] ...
	I0805 10:40:00.389348    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c8977dcd66d"
	I0805 10:40:00.400324    9085 logs.go:123] Gathering logs for kube-controller-manager [7b6baa84c14b] ...
	I0805 10:40:00.400335    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b6baa84c14b"
	I0805 10:40:00.418041    9085 logs.go:123] Gathering logs for kubelet ...
	I0805 10:40:00.418054    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:40:00.459506    9085 logs.go:123] Gathering logs for dmesg ...
	I0805 10:40:00.459517    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:40:00.463870    9085 logs.go:123] Gathering logs for kube-apiserver [d4f28dbbc4f1] ...
	I0805 10:40:00.463876    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f28dbbc4f1"
	I0805 10:40:00.478284    9085 logs.go:123] Gathering logs for kube-apiserver [13057f94c0f8] ...
	I0805 10:40:00.478300    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13057f94c0f8"
	I0805 10:40:00.494986    9085 logs.go:123] Gathering logs for etcd [275aaaabca50] ...
	I0805 10:40:00.494998    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 275aaaabca50"
	I0805 10:40:00.508978    9085 logs.go:123] Gathering logs for coredns [d211fad9684e] ...
	I0805 10:40:00.508990    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d211fad9684e"
	I0805 10:40:00.520716    9085 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:40:00.520731    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:40:00.556980    9085 logs.go:123] Gathering logs for kube-scheduler [ef0fa267cdc3] ...
	I0805 10:40:00.556994    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef0fa267cdc3"
	I0805 10:40:00.568366    9085 logs.go:123] Gathering logs for kube-proxy [9df378c7864e] ...
	I0805 10:40:00.568382    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9df378c7864e"
	I0805 10:40:00.582234    9085 logs.go:123] Gathering logs for kube-controller-manager [2c9aa7466dbd] ...
	I0805 10:40:00.582245    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c9aa7466dbd"
	I0805 10:40:00.593545    9085 logs.go:123] Gathering logs for container status ...
	I0805 10:40:00.593555    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:40:02.409674    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:40:02.410040    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:40:02.443504    9068 logs.go:276] 2 containers: [399a26756750 3c41f12d029f]
	I0805 10:40:02.443627    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:40:02.473566    9068 logs.go:276] 2 containers: [0d7ebfcf0f52 636206c34e2e]
	I0805 10:40:02.473645    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:40:02.487656    9068 logs.go:276] 1 containers: [3fc287268b86]
	I0805 10:40:02.487730    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:40:02.503056    9068 logs.go:276] 2 containers: [ea53529dcbfd 91f7a199884a]
	I0805 10:40:02.503121    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:40:02.513936    9068 logs.go:276] 1 containers: [a29b2ad2a3db]
	I0805 10:40:02.514003    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:40:02.527874    9068 logs.go:276] 2 containers: [a2439666ad31 e1cc9e5e2f59]
	I0805 10:40:02.527942    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:40:02.538024    9068 logs.go:276] 0 containers: []
	W0805 10:40:02.538036    9068 logs.go:278] No container was found matching "kindnet"
	I0805 10:40:02.538091    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:40:02.548611    9068 logs.go:276] 2 containers: [e66ab34a3554 78e756f0378c]
	I0805 10:40:02.548628    9068 logs.go:123] Gathering logs for kube-proxy [a29b2ad2a3db] ...
	I0805 10:40:02.548634    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a29b2ad2a3db"
	I0805 10:40:02.560085    9068 logs.go:123] Gathering logs for kube-controller-manager [e1cc9e5e2f59] ...
	I0805 10:40:02.560096    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1cc9e5e2f59"
	I0805 10:40:02.574195    9068 logs.go:123] Gathering logs for storage-provisioner [78e756f0378c] ...
	I0805 10:40:02.574207    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78e756f0378c"
	I0805 10:40:02.586084    9068 logs.go:123] Gathering logs for dmesg ...
	I0805 10:40:02.586096    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:40:02.590431    9068 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:40:02.590441    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:40:02.625374    9068 logs.go:123] Gathering logs for kube-apiserver [3c41f12d029f] ...
	I0805 10:40:02.625385    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c41f12d029f"
	I0805 10:40:02.673172    9068 logs.go:123] Gathering logs for coredns [3fc287268b86] ...
	I0805 10:40:02.673184    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fc287268b86"
	I0805 10:40:02.684456    9068 logs.go:123] Gathering logs for storage-provisioner [e66ab34a3554] ...
	I0805 10:40:02.684467    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e66ab34a3554"
	I0805 10:40:02.697498    9068 logs.go:123] Gathering logs for Docker ...
	I0805 10:40:02.697510    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:40:02.721980    9068 logs.go:123] Gathering logs for kubelet ...
	I0805 10:40:02.721987    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:40:02.759699    9068 logs.go:123] Gathering logs for kube-apiserver [399a26756750] ...
	I0805 10:40:02.759708    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 399a26756750"
	I0805 10:40:02.773599    9068 logs.go:123] Gathering logs for etcd [0d7ebfcf0f52] ...
	I0805 10:40:02.773611    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d7ebfcf0f52"
	I0805 10:40:02.787186    9068 logs.go:123] Gathering logs for kube-scheduler [ea53529dcbfd] ...
	I0805 10:40:02.787196    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea53529dcbfd"
	I0805 10:40:02.799095    9068 logs.go:123] Gathering logs for kube-scheduler [91f7a199884a] ...
	I0805 10:40:02.799106    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91f7a199884a"
	I0805 10:40:02.811061    9068 logs.go:123] Gathering logs for kube-controller-manager [a2439666ad31] ...
	I0805 10:40:02.811073    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2439666ad31"
	I0805 10:40:02.828849    9068 logs.go:123] Gathering logs for etcd [636206c34e2e] ...
	I0805 10:40:02.828862    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 636206c34e2e"
	I0805 10:40:02.843000    9068 logs.go:123] Gathering logs for container status ...
	I0805 10:40:02.843011    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:40:05.357052    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:40:03.108131    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:40:10.359813    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:40:10.360066    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:40:10.393464    9068 logs.go:276] 2 containers: [399a26756750 3c41f12d029f]
	I0805 10:40:10.393595    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:40:10.416016    9068 logs.go:276] 2 containers: [0d7ebfcf0f52 636206c34e2e]
	I0805 10:40:10.416119    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:40:10.433459    9068 logs.go:276] 1 containers: [3fc287268b86]
	I0805 10:40:10.433558    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:40:10.447268    9068 logs.go:276] 2 containers: [ea53529dcbfd 91f7a199884a]
	I0805 10:40:10.447348    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:40:10.458102    9068 logs.go:276] 1 containers: [a29b2ad2a3db]
	I0805 10:40:10.458171    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:40:10.468432    9068 logs.go:276] 2 containers: [a2439666ad31 e1cc9e5e2f59]
	I0805 10:40:10.468501    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:40:10.478311    9068 logs.go:276] 0 containers: []
	W0805 10:40:10.478331    9068 logs.go:278] No container was found matching "kindnet"
	I0805 10:40:10.478390    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:40:10.494842    9068 logs.go:276] 2 containers: [e66ab34a3554 78e756f0378c]
	I0805 10:40:10.494862    9068 logs.go:123] Gathering logs for kube-apiserver [3c41f12d029f] ...
	I0805 10:40:10.494868    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c41f12d029f"
	I0805 10:40:10.532123    9068 logs.go:123] Gathering logs for kube-scheduler [ea53529dcbfd] ...
	I0805 10:40:10.532134    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea53529dcbfd"
	I0805 10:40:10.543523    9068 logs.go:123] Gathering logs for dmesg ...
	I0805 10:40:10.543534    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:40:10.547995    9068 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:40:10.548002    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:40:08.110466    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:40:08.110880    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:40:08.147858    9085 logs.go:276] 2 containers: [d4f28dbbc4f1 13057f94c0f8]
	I0805 10:40:08.147980    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:40:08.168511    9085 logs.go:276] 2 containers: [275aaaabca50 d71cd5277bf8]
	I0805 10:40:08.168612    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:40:08.182428    9085 logs.go:276] 1 containers: [d211fad9684e]
	I0805 10:40:08.182505    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:40:08.196223    9085 logs.go:276] 2 containers: [ef0fa267cdc3 7c8977dcd66d]
	I0805 10:40:08.196295    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:40:08.209363    9085 logs.go:276] 1 containers: [9df378c7864e]
	I0805 10:40:08.209429    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:40:08.219493    9085 logs.go:276] 2 containers: [7b6baa84c14b 2c9aa7466dbd]
	I0805 10:40:08.219578    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:40:08.235748    9085 logs.go:276] 0 containers: []
	W0805 10:40:08.235763    9085 logs.go:278] No container was found matching "kindnet"
	I0805 10:40:08.235822    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:40:08.245814    9085 logs.go:276] 0 containers: []
	W0805 10:40:08.245824    9085 logs.go:278] No container was found matching "storage-provisioner"
	I0805 10:40:08.245834    9085 logs.go:123] Gathering logs for etcd [275aaaabca50] ...
	I0805 10:40:08.245840    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 275aaaabca50"
	I0805 10:40:08.261579    9085 logs.go:123] Gathering logs for kube-scheduler [7c8977dcd66d] ...
	I0805 10:40:08.261589    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c8977dcd66d"
	I0805 10:40:08.272427    9085 logs.go:123] Gathering logs for kube-controller-manager [2c9aa7466dbd] ...
	I0805 10:40:08.272437    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c9aa7466dbd"
	I0805 10:40:08.283622    9085 logs.go:123] Gathering logs for dmesg ...
	I0805 10:40:08.283635    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:40:08.287811    9085 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:40:08.287820    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:40:08.322103    9085 logs.go:123] Gathering logs for kube-apiserver [13057f94c0f8] ...
	I0805 10:40:08.322115    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13057f94c0f8"
	I0805 10:40:08.334324    9085 logs.go:123] Gathering logs for Docker ...
	I0805 10:40:08.334337    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:40:08.360396    9085 logs.go:123] Gathering logs for kubelet ...
	I0805 10:40:08.360404    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:40:08.401398    9085 logs.go:123] Gathering logs for kube-apiserver [d4f28dbbc4f1] ...
	I0805 10:40:08.401405    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f28dbbc4f1"
	I0805 10:40:08.417327    9085 logs.go:123] Gathering logs for etcd [d71cd5277bf8] ...
	I0805 10:40:08.417337    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71cd5277bf8"
	I0805 10:40:08.431763    9085 logs.go:123] Gathering logs for kube-controller-manager [7b6baa84c14b] ...
	I0805 10:40:08.431777    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b6baa84c14b"
	I0805 10:40:08.450713    9085 logs.go:123] Gathering logs for coredns [d211fad9684e] ...
	I0805 10:40:08.450723    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d211fad9684e"
	I0805 10:40:08.462358    9085 logs.go:123] Gathering logs for kube-scheduler [ef0fa267cdc3] ...
	I0805 10:40:08.462369    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef0fa267cdc3"
	I0805 10:40:08.478273    9085 logs.go:123] Gathering logs for kube-proxy [9df378c7864e] ...
	I0805 10:40:08.478286    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9df378c7864e"
	I0805 10:40:08.490258    9085 logs.go:123] Gathering logs for container status ...
	I0805 10:40:08.490271    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:40:11.004834    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:40:10.585609    9068 logs.go:123] Gathering logs for etcd [0d7ebfcf0f52] ...
	I0805 10:40:10.585620    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d7ebfcf0f52"
	I0805 10:40:10.600023    9068 logs.go:123] Gathering logs for storage-provisioner [e66ab34a3554] ...
	I0805 10:40:10.600036    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e66ab34a3554"
	I0805 10:40:10.611764    9068 logs.go:123] Gathering logs for kube-scheduler [91f7a199884a] ...
	I0805 10:40:10.611775    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91f7a199884a"
	I0805 10:40:10.624428    9068 logs.go:123] Gathering logs for kube-controller-manager [a2439666ad31] ...
	I0805 10:40:10.624440    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2439666ad31"
	I0805 10:40:10.641661    9068 logs.go:123] Gathering logs for container status ...
	I0805 10:40:10.641671    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:40:10.654059    9068 logs.go:123] Gathering logs for kube-controller-manager [e1cc9e5e2f59] ...
	I0805 10:40:10.654071    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1cc9e5e2f59"
	I0805 10:40:10.669507    9068 logs.go:123] Gathering logs for storage-provisioner [78e756f0378c] ...
	I0805 10:40:10.669523    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78e756f0378c"
	I0805 10:40:10.681265    9068 logs.go:123] Gathering logs for Docker ...
	I0805 10:40:10.681277    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:40:10.705641    9068 logs.go:123] Gathering logs for kubelet ...
	I0805 10:40:10.705650    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:40:10.743377    9068 logs.go:123] Gathering logs for kube-apiserver [399a26756750] ...
	I0805 10:40:10.743384    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 399a26756750"
	I0805 10:40:10.757094    9068 logs.go:123] Gathering logs for etcd [636206c34e2e] ...
	I0805 10:40:10.757109    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 636206c34e2e"
	I0805 10:40:10.772784    9068 logs.go:123] Gathering logs for coredns [3fc287268b86] ...
	I0805 10:40:10.772801    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fc287268b86"
	I0805 10:40:10.784053    9068 logs.go:123] Gathering logs for kube-proxy [a29b2ad2a3db] ...
	I0805 10:40:10.784064    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a29b2ad2a3db"
	I0805 10:40:13.297449    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:40:16.007043    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:40:16.007281    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:40:16.027012    9085 logs.go:276] 2 containers: [d4f28dbbc4f1 13057f94c0f8]
	I0805 10:40:16.027104    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:40:16.040984    9085 logs.go:276] 2 containers: [275aaaabca50 d71cd5277bf8]
	I0805 10:40:16.041063    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:40:16.052596    9085 logs.go:276] 1 containers: [d211fad9684e]
	I0805 10:40:16.052669    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:40:16.063289    9085 logs.go:276] 2 containers: [ef0fa267cdc3 7c8977dcd66d]
	I0805 10:40:16.063351    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:40:16.073900    9085 logs.go:276] 1 containers: [9df378c7864e]
	I0805 10:40:16.073971    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:40:16.085037    9085 logs.go:276] 2 containers: [7b6baa84c14b 2c9aa7466dbd]
	I0805 10:40:16.085114    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:40:16.096938    9085 logs.go:276] 0 containers: []
	W0805 10:40:16.096949    9085 logs.go:278] No container was found matching "kindnet"
	I0805 10:40:16.097009    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:40:16.107166    9085 logs.go:276] 0 containers: []
	W0805 10:40:16.107177    9085 logs.go:278] No container was found matching "storage-provisioner"
	I0805 10:40:16.107184    9085 logs.go:123] Gathering logs for kube-apiserver [13057f94c0f8] ...
	I0805 10:40:16.107189    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13057f94c0f8"
	I0805 10:40:16.119132    9085 logs.go:123] Gathering logs for dmesg ...
	I0805 10:40:16.119143    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:40:16.123463    9085 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:40:16.123472    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:40:16.157987    9085 logs.go:123] Gathering logs for kube-apiserver [d4f28dbbc4f1] ...
	I0805 10:40:16.157999    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f28dbbc4f1"
	I0805 10:40:16.172465    9085 logs.go:123] Gathering logs for etcd [275aaaabca50] ...
	I0805 10:40:16.172477    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 275aaaabca50"
	I0805 10:40:16.186116    9085 logs.go:123] Gathering logs for kube-controller-manager [2c9aa7466dbd] ...
	I0805 10:40:16.186126    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c9aa7466dbd"
	I0805 10:40:16.197418    9085 logs.go:123] Gathering logs for etcd [d71cd5277bf8] ...
	I0805 10:40:16.197428    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71cd5277bf8"
	I0805 10:40:16.211766    9085 logs.go:123] Gathering logs for kube-scheduler [7c8977dcd66d] ...
	I0805 10:40:16.211776    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c8977dcd66d"
	I0805 10:40:16.222710    9085 logs.go:123] Gathering logs for container status ...
	I0805 10:40:16.222721    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:40:16.234113    9085 logs.go:123] Gathering logs for kubelet ...
	I0805 10:40:16.234124    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:40:16.274154    9085 logs.go:123] Gathering logs for coredns [d211fad9684e] ...
	I0805 10:40:16.274163    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d211fad9684e"
	I0805 10:40:16.285023    9085 logs.go:123] Gathering logs for kube-scheduler [ef0fa267cdc3] ...
	I0805 10:40:16.285035    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef0fa267cdc3"
	I0805 10:40:16.300809    9085 logs.go:123] Gathering logs for kube-proxy [9df378c7864e] ...
	I0805 10:40:16.300818    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9df378c7864e"
	I0805 10:40:16.312245    9085 logs.go:123] Gathering logs for kube-controller-manager [7b6baa84c14b] ...
	I0805 10:40:16.312257    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b6baa84c14b"
	I0805 10:40:16.329149    9085 logs.go:123] Gathering logs for Docker ...
	I0805 10:40:16.329163    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:40:18.299741    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:40:18.299949    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:40:18.329441    9068 logs.go:276] 2 containers: [399a26756750 3c41f12d029f]
	I0805 10:40:18.329557    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:40:18.346858    9068 logs.go:276] 2 containers: [0d7ebfcf0f52 636206c34e2e]
	I0805 10:40:18.346946    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:40:18.360386    9068 logs.go:276] 1 containers: [3fc287268b86]
	I0805 10:40:18.360463    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:40:18.371922    9068 logs.go:276] 2 containers: [ea53529dcbfd 91f7a199884a]
	I0805 10:40:18.371997    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:40:18.383208    9068 logs.go:276] 1 containers: [a29b2ad2a3db]
	I0805 10:40:18.383276    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:40:18.394082    9068 logs.go:276] 2 containers: [a2439666ad31 e1cc9e5e2f59]
	I0805 10:40:18.394156    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:40:18.404922    9068 logs.go:276] 0 containers: []
	W0805 10:40:18.404935    9068 logs.go:278] No container was found matching "kindnet"
	I0805 10:40:18.405001    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:40:18.420326    9068 logs.go:276] 2 containers: [e66ab34a3554 78e756f0378c]
	I0805 10:40:18.420342    9068 logs.go:123] Gathering logs for kube-scheduler [ea53529dcbfd] ...
	I0805 10:40:18.420349    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea53529dcbfd"
	I0805 10:40:18.432096    9068 logs.go:123] Gathering logs for kube-scheduler [91f7a199884a] ...
	I0805 10:40:18.432107    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91f7a199884a"
	I0805 10:40:18.443821    9068 logs.go:123] Gathering logs for etcd [0d7ebfcf0f52] ...
	I0805 10:40:18.443832    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d7ebfcf0f52"
	I0805 10:40:18.461334    9068 logs.go:123] Gathering logs for etcd [636206c34e2e] ...
	I0805 10:40:18.461344    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 636206c34e2e"
	I0805 10:40:18.475821    9068 logs.go:123] Gathering logs for kube-proxy [a29b2ad2a3db] ...
	I0805 10:40:18.475832    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a29b2ad2a3db"
	I0805 10:40:18.487670    9068 logs.go:123] Gathering logs for kube-controller-manager [e1cc9e5e2f59] ...
	I0805 10:40:18.487681    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1cc9e5e2f59"
	I0805 10:40:18.501506    9068 logs.go:123] Gathering logs for kube-apiserver [3c41f12d029f] ...
	I0805 10:40:18.501520    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c41f12d029f"
	I0805 10:40:18.540890    9068 logs.go:123] Gathering logs for dmesg ...
	I0805 10:40:18.540913    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:40:18.549708    9068 logs.go:123] Gathering logs for storage-provisioner [e66ab34a3554] ...
	I0805 10:40:18.549721    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e66ab34a3554"
	I0805 10:40:18.562598    9068 logs.go:123] Gathering logs for kubelet ...
	I0805 10:40:18.562609    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:40:18.600141    9068 logs.go:123] Gathering logs for kube-apiserver [399a26756750] ...
	I0805 10:40:18.600152    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 399a26756750"
	I0805 10:40:18.615033    9068 logs.go:123] Gathering logs for coredns [3fc287268b86] ...
	I0805 10:40:18.615047    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fc287268b86"
	I0805 10:40:18.626695    9068 logs.go:123] Gathering logs for kube-controller-manager [a2439666ad31] ...
	I0805 10:40:18.626708    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2439666ad31"
	I0805 10:40:18.644176    9068 logs.go:123] Gathering logs for storage-provisioner [78e756f0378c] ...
	I0805 10:40:18.644187    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78e756f0378c"
	I0805 10:40:18.656062    9068 logs.go:123] Gathering logs for Docker ...
	I0805 10:40:18.656075    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:40:18.679659    9068 logs.go:123] Gathering logs for container status ...
	I0805 10:40:18.679670    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:40:18.691566    9068 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:40:18.691579    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:40:18.856828    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:40:21.232731    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:40:23.857821    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:40:23.858145    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:40:23.892308    9085 logs.go:276] 2 containers: [d4f28dbbc4f1 13057f94c0f8]
	I0805 10:40:23.892454    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:40:23.913557    9085 logs.go:276] 2 containers: [275aaaabca50 d71cd5277bf8]
	I0805 10:40:23.913675    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:40:23.928069    9085 logs.go:276] 1 containers: [d211fad9684e]
	I0805 10:40:23.928147    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:40:23.940731    9085 logs.go:276] 2 containers: [ef0fa267cdc3 7c8977dcd66d]
	I0805 10:40:23.940817    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:40:23.951656    9085 logs.go:276] 1 containers: [9df378c7864e]
	I0805 10:40:23.951728    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:40:23.962753    9085 logs.go:276] 2 containers: [7b6baa84c14b 2c9aa7466dbd]
	I0805 10:40:23.962823    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:40:23.973338    9085 logs.go:276] 0 containers: []
	W0805 10:40:23.973349    9085 logs.go:278] No container was found matching "kindnet"
	I0805 10:40:23.973420    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:40:23.984104    9085 logs.go:276] 0 containers: []
	W0805 10:40:23.984114    9085 logs.go:278] No container was found matching "storage-provisioner"
	I0805 10:40:23.984120    9085 logs.go:123] Gathering logs for kube-apiserver [d4f28dbbc4f1] ...
	I0805 10:40:23.984126    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f28dbbc4f1"
	I0805 10:40:23.998745    9085 logs.go:123] Gathering logs for etcd [275aaaabca50] ...
	I0805 10:40:23.998759    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 275aaaabca50"
	I0805 10:40:24.012448    9085 logs.go:123] Gathering logs for etcd [d71cd5277bf8] ...
	I0805 10:40:24.012461    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71cd5277bf8"
	I0805 10:40:24.026945    9085 logs.go:123] Gathering logs for kube-controller-manager [7b6baa84c14b] ...
	I0805 10:40:24.026957    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b6baa84c14b"
	I0805 10:40:24.044372    9085 logs.go:123] Gathering logs for Docker ...
	I0805 10:40:24.044385    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:40:24.069716    9085 logs.go:123] Gathering logs for container status ...
	I0805 10:40:24.069725    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:40:24.083947    9085 logs.go:123] Gathering logs for dmesg ...
	I0805 10:40:24.083962    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:40:24.088324    9085 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:40:24.088330    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:40:24.125794    9085 logs.go:123] Gathering logs for kube-apiserver [13057f94c0f8] ...
	I0805 10:40:24.125808    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13057f94c0f8"
	I0805 10:40:24.137986    9085 logs.go:123] Gathering logs for kube-scheduler [ef0fa267cdc3] ...
	I0805 10:40:24.137998    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef0fa267cdc3"
	I0805 10:40:24.149514    9085 logs.go:123] Gathering logs for kube-proxy [9df378c7864e] ...
	I0805 10:40:24.149527    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9df378c7864e"
	I0805 10:40:24.161238    9085 logs.go:123] Gathering logs for kubelet ...
	I0805 10:40:24.161252    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:40:24.203501    9085 logs.go:123] Gathering logs for coredns [d211fad9684e] ...
	I0805 10:40:24.203510    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d211fad9684e"
	I0805 10:40:24.215266    9085 logs.go:123] Gathering logs for kube-scheduler [7c8977dcd66d] ...
	I0805 10:40:24.215279    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c8977dcd66d"
	I0805 10:40:24.226794    9085 logs.go:123] Gathering logs for kube-controller-manager [2c9aa7466dbd] ...
	I0805 10:40:24.226807    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c9aa7466dbd"
	I0805 10:40:26.739858    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:40:26.233491    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:40:26.233834    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:40:26.273617    9068 logs.go:276] 2 containers: [399a26756750 3c41f12d029f]
	I0805 10:40:26.273754    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:40:26.296267    9068 logs.go:276] 2 containers: [0d7ebfcf0f52 636206c34e2e]
	I0805 10:40:26.296376    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:40:26.312017    9068 logs.go:276] 1 containers: [3fc287268b86]
	I0805 10:40:26.312097    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:40:26.328392    9068 logs.go:276] 2 containers: [ea53529dcbfd 91f7a199884a]
	I0805 10:40:26.328465    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:40:26.344458    9068 logs.go:276] 1 containers: [a29b2ad2a3db]
	I0805 10:40:26.344527    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:40:26.355536    9068 logs.go:276] 2 containers: [a2439666ad31 e1cc9e5e2f59]
	I0805 10:40:26.355607    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:40:26.368150    9068 logs.go:276] 0 containers: []
	W0805 10:40:26.368163    9068 logs.go:278] No container was found matching "kindnet"
	I0805 10:40:26.368224    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:40:26.379197    9068 logs.go:276] 2 containers: [e66ab34a3554 78e756f0378c]
	I0805 10:40:26.379218    9068 logs.go:123] Gathering logs for kube-apiserver [399a26756750] ...
	I0805 10:40:26.379223    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 399a26756750"
	I0805 10:40:26.398070    9068 logs.go:123] Gathering logs for kube-apiserver [3c41f12d029f] ...
	I0805 10:40:26.398079    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c41f12d029f"
	I0805 10:40:26.436386    9068 logs.go:123] Gathering logs for coredns [3fc287268b86] ...
	I0805 10:40:26.436397    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fc287268b86"
	I0805 10:40:26.448883    9068 logs.go:123] Gathering logs for storage-provisioner [e66ab34a3554] ...
	I0805 10:40:26.448896    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e66ab34a3554"
	I0805 10:40:26.461101    9068 logs.go:123] Gathering logs for Docker ...
	I0805 10:40:26.461112    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:40:26.485798    9068 logs.go:123] Gathering logs for kube-controller-manager [e1cc9e5e2f59] ...
	I0805 10:40:26.485808    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1cc9e5e2f59"
	I0805 10:40:26.499822    9068 logs.go:123] Gathering logs for storage-provisioner [78e756f0378c] ...
	I0805 10:40:26.499833    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78e756f0378c"
	I0805 10:40:26.511490    9068 logs.go:123] Gathering logs for container status ...
	I0805 10:40:26.511502    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:40:26.523367    9068 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:40:26.523378    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:40:26.560330    9068 logs.go:123] Gathering logs for etcd [0d7ebfcf0f52] ...
	I0805 10:40:26.560343    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d7ebfcf0f52"
	I0805 10:40:26.576700    9068 logs.go:123] Gathering logs for etcd [636206c34e2e] ...
	I0805 10:40:26.576713    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 636206c34e2e"
	I0805 10:40:26.590753    9068 logs.go:123] Gathering logs for kube-proxy [a29b2ad2a3db] ...
	I0805 10:40:26.590764    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a29b2ad2a3db"
	I0805 10:40:26.602550    9068 logs.go:123] Gathering logs for kube-controller-manager [a2439666ad31] ...
	I0805 10:40:26.602562    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2439666ad31"
	I0805 10:40:26.623310    9068 logs.go:123] Gathering logs for kubelet ...
	I0805 10:40:26.623324    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:40:26.660540    9068 logs.go:123] Gathering logs for dmesg ...
	I0805 10:40:26.660552    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:40:26.664525    9068 logs.go:123] Gathering logs for kube-scheduler [ea53529dcbfd] ...
	I0805 10:40:26.664533    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea53529dcbfd"
	I0805 10:40:26.676804    9068 logs.go:123] Gathering logs for kube-scheduler [91f7a199884a] ...
	I0805 10:40:26.676815    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91f7a199884a"
	I0805 10:40:29.192049    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:40:31.742018    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:40:31.742157    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:40:31.755942    9085 logs.go:276] 2 containers: [d4f28dbbc4f1 13057f94c0f8]
	I0805 10:40:31.756025    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:40:31.767834    9085 logs.go:276] 2 containers: [275aaaabca50 d71cd5277bf8]
	I0805 10:40:31.767901    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:40:31.781056    9085 logs.go:276] 1 containers: [d211fad9684e]
	I0805 10:40:31.781129    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:40:31.792257    9085 logs.go:276] 2 containers: [ef0fa267cdc3 7c8977dcd66d]
	I0805 10:40:31.792325    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:40:31.802197    9085 logs.go:276] 1 containers: [9df378c7864e]
	I0805 10:40:31.802275    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:40:31.812919    9085 logs.go:276] 2 containers: [7b6baa84c14b 2c9aa7466dbd]
	I0805 10:40:31.812991    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:40:31.827283    9085 logs.go:276] 0 containers: []
	W0805 10:40:31.827294    9085 logs.go:278] No container was found matching "kindnet"
	I0805 10:40:31.827353    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:40:31.837331    9085 logs.go:276] 0 containers: []
	W0805 10:40:31.837343    9085 logs.go:278] No container was found matching "storage-provisioner"
	I0805 10:40:31.837351    9085 logs.go:123] Gathering logs for kube-apiserver [d4f28dbbc4f1] ...
	I0805 10:40:31.837357    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f28dbbc4f1"
	I0805 10:40:31.851140    9085 logs.go:123] Gathering logs for Docker ...
	I0805 10:40:31.851152    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:40:31.877406    9085 logs.go:123] Gathering logs for container status ...
	I0805 10:40:31.877418    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:40:31.889609    9085 logs.go:123] Gathering logs for kubelet ...
	I0805 10:40:31.889620    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:40:31.931964    9085 logs.go:123] Gathering logs for kube-scheduler [ef0fa267cdc3] ...
	I0805 10:40:31.931976    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef0fa267cdc3"
	I0805 10:40:31.943731    9085 logs.go:123] Gathering logs for kube-controller-manager [7b6baa84c14b] ...
	I0805 10:40:31.943743    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b6baa84c14b"
	I0805 10:40:31.961153    9085 logs.go:123] Gathering logs for kube-controller-manager [2c9aa7466dbd] ...
	I0805 10:40:31.961163    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c9aa7466dbd"
	I0805 10:40:31.972929    9085 logs.go:123] Gathering logs for dmesg ...
	I0805 10:40:31.972943    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:40:31.978137    9085 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:40:31.978145    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:40:32.013793    9085 logs.go:123] Gathering logs for etcd [275aaaabca50] ...
	I0805 10:40:32.013805    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 275aaaabca50"
	I0805 10:40:32.027730    9085 logs.go:123] Gathering logs for etcd [d71cd5277bf8] ...
	I0805 10:40:32.027741    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71cd5277bf8"
	I0805 10:40:32.051564    9085 logs.go:123] Gathering logs for kube-scheduler [7c8977dcd66d] ...
	I0805 10:40:32.051575    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c8977dcd66d"
	I0805 10:40:32.062704    9085 logs.go:123] Gathering logs for kube-proxy [9df378c7864e] ...
	I0805 10:40:32.062719    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9df378c7864e"
	I0805 10:40:32.074123    9085 logs.go:123] Gathering logs for kube-apiserver [13057f94c0f8] ...
	I0805 10:40:32.074136    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13057f94c0f8"
	I0805 10:40:32.090629    9085 logs.go:123] Gathering logs for coredns [d211fad9684e] ...
	I0805 10:40:32.090640    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d211fad9684e"
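
Each gather step then tails the last 400 lines of one container's output, and every command is wrapped in `/bin/bash -c` because it is executed over SSH inside the guest VM. A hedged sketch of a single dump, keeping the bash wrapper for fidelity (the container ID is copied from the cycle above):

```go
// Replay one gather step from the log: tail the last 400 lines of a
// container's output through /bin/bash -c, as ssh_runner does when the
// command runs inside the guest.
package main

import (
	"fmt"
	"os/exec"
)

func dumpContainerLogs(id string) ([]byte, error) {
	cmd := fmt.Sprintf("docker logs --tail 400 %s", id)
	return exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
}

func main() {
	out, err := dumpContainerLogs("d4f28dbbc4f1") // ID taken from the cycle above
	if err != nil {
		fmt.Println("gather failed:", err)
	}
	fmt.Print(string(out))
}
```
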
	I0805 10:40:34.194562    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:40:34.194851    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:40:34.223048    9068 logs.go:276] 2 containers: [399a26756750 3c41f12d029f]
	I0805 10:40:34.223174    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:40:34.240220    9068 logs.go:276] 2 containers: [0d7ebfcf0f52 636206c34e2e]
	I0805 10:40:34.240308    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:40:34.253445    9068 logs.go:276] 1 containers: [3fc287268b86]
	I0805 10:40:34.253513    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:40:34.265416    9068 logs.go:276] 2 containers: [ea53529dcbfd 91f7a199884a]
	I0805 10:40:34.265482    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:40:34.275991    9068 logs.go:276] 1 containers: [a29b2ad2a3db]
	I0805 10:40:34.276058    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:40:34.286304    9068 logs.go:276] 2 containers: [a2439666ad31 e1cc9e5e2f59]
	I0805 10:40:34.286375    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:40:34.296623    9068 logs.go:276] 0 containers: []
	W0805 10:40:34.296634    9068 logs.go:278] No container was found matching "kindnet"
	I0805 10:40:34.296685    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:40:34.306970    9068 logs.go:276] 2 containers: [e66ab34a3554 78e756f0378c]
	I0805 10:40:34.306989    9068 logs.go:123] Gathering logs for kube-apiserver [399a26756750] ...
	I0805 10:40:34.306995    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 399a26756750"
	I0805 10:40:34.324922    9068 logs.go:123] Gathering logs for coredns [3fc287268b86] ...
	I0805 10:40:34.324934    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fc287268b86"
	I0805 10:40:34.336758    9068 logs.go:123] Gathering logs for container status ...
	I0805 10:40:34.336769    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:40:34.349011    9068 logs.go:123] Gathering logs for dmesg ...
	I0805 10:40:34.349023    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:40:34.353223    9068 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:40:34.353231    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:40:34.387939    9068 logs.go:123] Gathering logs for storage-provisioner [78e756f0378c] ...
	I0805 10:40:34.387949    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78e756f0378c"
	I0805 10:40:34.399418    9068 logs.go:123] Gathering logs for etcd [636206c34e2e] ...
	I0805 10:40:34.399428    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 636206c34e2e"
	I0805 10:40:34.413862    9068 logs.go:123] Gathering logs for kube-controller-manager [e1cc9e5e2f59] ...
	I0805 10:40:34.413873    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1cc9e5e2f59"
	I0805 10:40:34.427944    9068 logs.go:123] Gathering logs for etcd [0d7ebfcf0f52] ...
	I0805 10:40:34.427956    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d7ebfcf0f52"
	I0805 10:40:34.442122    9068 logs.go:123] Gathering logs for kube-scheduler [ea53529dcbfd] ...
	I0805 10:40:34.442132    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea53529dcbfd"
	I0805 10:40:34.454200    9068 logs.go:123] Gathering logs for kube-scheduler [91f7a199884a] ...
	I0805 10:40:34.454212    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91f7a199884a"
	I0805 10:40:34.466351    9068 logs.go:123] Gathering logs for kube-proxy [a29b2ad2a3db] ...
	I0805 10:40:34.466360    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a29b2ad2a3db"
	I0805 10:40:34.478074    9068 logs.go:123] Gathering logs for kube-controller-manager [a2439666ad31] ...
	I0805 10:40:34.478083    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2439666ad31"
	I0805 10:40:34.496225    9068 logs.go:123] Gathering logs for storage-provisioner [e66ab34a3554] ...
	I0805 10:40:34.496235    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e66ab34a3554"
	I0805 10:40:34.507572    9068 logs.go:123] Gathering logs for kubelet ...
	I0805 10:40:34.507583    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:40:34.543303    9068 logs.go:123] Gathering logs for kube-apiserver [3c41f12d029f] ...
	I0805 10:40:34.543311    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c41f12d029f"
	I0805 10:40:34.580285    9068 logs.go:123] Gathering logs for Docker ...
	I0805 10:40:34.580300    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
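
The "container status" step uses a shell fallback, ``sudo `which crictl || echo crictl` ps -a || sudo docker ps -a``: prefer crictl when it is installed, otherwise list containers with docker directly. The same idea expressed in Go (a sketch; exec.LookPath plays the role of `which`):

```go
// Prefer crictl when it is on PATH, fall back to docker ps -a otherwise,
// mirroring the "container status" command in the cycles above.
package main

import (
	"fmt"
	"os/exec"
)

func containerStatus() ([]byte, error) {
	if path, err := exec.LookPath("crictl"); err == nil {
		return exec.Command("sudo", path, "ps", "-a").CombinedOutput()
	}
	return exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println(err)
	}
	fmt.Print(string(out))
}
```
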
	I0805 10:40:34.603726    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:40:37.105244    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:40:39.605899    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:40:39.606192    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:40:39.633119    9085 logs.go:276] 2 containers: [d4f28dbbc4f1 13057f94c0f8]
	I0805 10:40:39.633252    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:40:39.650611    9085 logs.go:276] 2 containers: [275aaaabca50 d71cd5277bf8]
	I0805 10:40:39.650697    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:40:39.664121    9085 logs.go:276] 1 containers: [d211fad9684e]
	I0805 10:40:39.664201    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:40:39.676067    9085 logs.go:276] 2 containers: [ef0fa267cdc3 7c8977dcd66d]
	I0805 10:40:39.676134    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:40:39.686647    9085 logs.go:276] 1 containers: [9df378c7864e]
	I0805 10:40:39.686721    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:40:39.697914    9085 logs.go:276] 2 containers: [7b6baa84c14b 2c9aa7466dbd]
	I0805 10:40:39.697984    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:40:39.708263    9085 logs.go:276] 0 containers: []
	W0805 10:40:39.708276    9085 logs.go:278] No container was found matching "kindnet"
	I0805 10:40:39.708344    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:40:39.718132    9085 logs.go:276] 0 containers: []
	W0805 10:40:39.718144    9085 logs.go:278] No container was found matching "storage-provisioner"
	I0805 10:40:39.718152    9085 logs.go:123] Gathering logs for kube-apiserver [d4f28dbbc4f1] ...
	I0805 10:40:39.718158    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f28dbbc4f1"
	I0805 10:40:39.732482    9085 logs.go:123] Gathering logs for kube-scheduler [7c8977dcd66d] ...
	I0805 10:40:39.732495    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c8977dcd66d"
	I0805 10:40:39.743631    9085 logs.go:123] Gathering logs for container status ...
	I0805 10:40:39.743644    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:40:39.755944    9085 logs.go:123] Gathering logs for kube-proxy [9df378c7864e] ...
	I0805 10:40:39.755954    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9df378c7864e"
	I0805 10:40:39.767733    9085 logs.go:123] Gathering logs for kube-controller-manager [7b6baa84c14b] ...
	I0805 10:40:39.767743    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b6baa84c14b"
	I0805 10:40:39.784884    9085 logs.go:123] Gathering logs for etcd [275aaaabca50] ...
	I0805 10:40:39.784895    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 275aaaabca50"
	I0805 10:40:39.802821    9085 logs.go:123] Gathering logs for etcd [d71cd5277bf8] ...
	I0805 10:40:39.802833    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71cd5277bf8"
	I0805 10:40:39.818505    9085 logs.go:123] Gathering logs for coredns [d211fad9684e] ...
	I0805 10:40:39.818515    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d211fad9684e"
	I0805 10:40:39.830125    9085 logs.go:123] Gathering logs for kube-scheduler [ef0fa267cdc3] ...
	I0805 10:40:39.830136    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef0fa267cdc3"
	I0805 10:40:39.841355    9085 logs.go:123] Gathering logs for Docker ...
	I0805 10:40:39.841365    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:40:39.867789    9085 logs.go:123] Gathering logs for kubelet ...
	I0805 10:40:39.867798    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:40:39.910122    9085 logs.go:123] Gathering logs for dmesg ...
	I0805 10:40:39.910132    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:40:39.915072    9085 logs.go:123] Gathering logs for kube-apiserver [13057f94c0f8] ...
	I0805 10:40:39.915078    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13057f94c0f8"
	I0805 10:40:39.930532    9085 logs.go:123] Gathering logs for kube-controller-manager [2c9aa7466dbd] ...
	I0805 10:40:39.930543    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c9aa7466dbd"
	I0805 10:40:39.942072    9085 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:40:39.942084    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
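
The "describe nodes" step does not use the host's kubectl; it runs the version-matched binary minikube keeps inside the guest (v1.24.1 here) against the in-VM kubeconfig. A sketch with both paths copied from the log:

```go
// Run the guest's pinned kubectl against the in-VM kubeconfig, as the
// "describe nodes" gather step above does. Paths are copied from the log.
package main

import (
	"fmt"
	"os/exec"
)

func describeNodes() ([]byte, error) {
	return exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.24.1/kubectl",
		"describe", "nodes",
		"--kubeconfig=/var/lib/minikube/kubeconfig").CombinedOutput()
}

func main() {
	out, err := describeNodes()
	if err != nil {
		fmt.Println(err)
	}
	fmt.Print(string(out))
}
```
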
	I0805 10:40:42.107531    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:40:42.107811    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:40:42.132407    9068 logs.go:276] 2 containers: [399a26756750 3c41f12d029f]
	I0805 10:40:42.132520    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:40:42.149177    9068 logs.go:276] 2 containers: [0d7ebfcf0f52 636206c34e2e]
	I0805 10:40:42.149266    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:40:42.162450    9068 logs.go:276] 1 containers: [3fc287268b86]
	I0805 10:40:42.162515    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:40:42.173997    9068 logs.go:276] 2 containers: [ea53529dcbfd 91f7a199884a]
	I0805 10:40:42.174060    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:40:42.184688    9068 logs.go:276] 1 containers: [a29b2ad2a3db]
	I0805 10:40:42.184758    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:40:42.195409    9068 logs.go:276] 2 containers: [a2439666ad31 e1cc9e5e2f59]
	I0805 10:40:42.195470    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:40:42.205801    9068 logs.go:276] 0 containers: []
	W0805 10:40:42.205816    9068 logs.go:278] No container was found matching "kindnet"
	I0805 10:40:42.205878    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:40:42.216203    9068 logs.go:276] 2 containers: [e66ab34a3554 78e756f0378c]
	I0805 10:40:42.216218    9068 logs.go:123] Gathering logs for Docker ...
	I0805 10:40:42.216224    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:40:42.240659    9068 logs.go:123] Gathering logs for container status ...
	I0805 10:40:42.240671    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:40:42.255213    9068 logs.go:123] Gathering logs for kube-scheduler [ea53529dcbfd] ...
	I0805 10:40:42.255232    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea53529dcbfd"
	I0805 10:40:42.267319    9068 logs.go:123] Gathering logs for kube-controller-manager [a2439666ad31] ...
	I0805 10:40:42.267331    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2439666ad31"
	I0805 10:40:42.284221    9068 logs.go:123] Gathering logs for kube-controller-manager [e1cc9e5e2f59] ...
	I0805 10:40:42.284235    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1cc9e5e2f59"
	I0805 10:40:42.297812    9068 logs.go:123] Gathering logs for storage-provisioner [e66ab34a3554] ...
	I0805 10:40:42.297822    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e66ab34a3554"
	I0805 10:40:42.309774    9068 logs.go:123] Gathering logs for storage-provisioner [78e756f0378c] ...
	I0805 10:40:42.309785    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78e756f0378c"
	I0805 10:40:42.320785    9068 logs.go:123] Gathering logs for kubelet ...
	I0805 10:40:42.320796    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:40:42.359812    9068 logs.go:123] Gathering logs for etcd [0d7ebfcf0f52] ...
	I0805 10:40:42.359821    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d7ebfcf0f52"
	I0805 10:40:42.374013    9068 logs.go:123] Gathering logs for kube-scheduler [91f7a199884a] ...
	I0805 10:40:42.374026    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91f7a199884a"
	I0805 10:40:42.386940    9068 logs.go:123] Gathering logs for kube-proxy [a29b2ad2a3db] ...
	I0805 10:40:42.386951    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a29b2ad2a3db"
	I0805 10:40:42.398816    9068 logs.go:123] Gathering logs for dmesg ...
	I0805 10:40:42.398827    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:40:42.402902    9068 logs.go:123] Gathering logs for etcd [636206c34e2e] ...
	I0805 10:40:42.402909    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 636206c34e2e"
	I0805 10:40:42.417547    9068 logs.go:123] Gathering logs for kube-apiserver [3c41f12d029f] ...
	I0805 10:40:42.417557    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c41f12d029f"
	I0805 10:40:42.455339    9068 logs.go:123] Gathering logs for coredns [3fc287268b86] ...
	I0805 10:40:42.455351    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fc287268b86"
	I0805 10:40:42.466800    9068 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:40:42.466812    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:40:42.504471    9068 logs.go:123] Gathering logs for kube-apiserver [399a26756750] ...
	I0805 10:40:42.504481    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 399a26756750"
	I0805 10:40:45.021428    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:40:42.479461    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:40:50.023671    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:40:50.023838    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:40:50.038120    9068 logs.go:276] 2 containers: [399a26756750 3c41f12d029f]
	I0805 10:40:50.038201    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:40:50.049845    9068 logs.go:276] 2 containers: [0d7ebfcf0f52 636206c34e2e]
	I0805 10:40:50.049906    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:40:50.063526    9068 logs.go:276] 1 containers: [3fc287268b86]
	I0805 10:40:50.063605    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:40:50.073953    9068 logs.go:276] 2 containers: [ea53529dcbfd 91f7a199884a]
	I0805 10:40:50.074015    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:40:50.084478    9068 logs.go:276] 1 containers: [a29b2ad2a3db]
	I0805 10:40:50.084550    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:40:50.095411    9068 logs.go:276] 2 containers: [a2439666ad31 e1cc9e5e2f59]
	I0805 10:40:50.095472    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:40:50.105762    9068 logs.go:276] 0 containers: []
	W0805 10:40:50.105775    9068 logs.go:278] No container was found matching "kindnet"
	I0805 10:40:50.105835    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:40:50.116482    9068 logs.go:276] 2 containers: [e66ab34a3554 78e756f0378c]
	I0805 10:40:50.116501    9068 logs.go:123] Gathering logs for kube-proxy [a29b2ad2a3db] ...
	I0805 10:40:50.116506    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a29b2ad2a3db"
	I0805 10:40:50.127729    9068 logs.go:123] Gathering logs for kube-controller-manager [a2439666ad31] ...
	I0805 10:40:50.127740    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2439666ad31"
	I0805 10:40:50.144918    9068 logs.go:123] Gathering logs for kube-controller-manager [e1cc9e5e2f59] ...
	I0805 10:40:50.144929    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1cc9e5e2f59"
	I0805 10:40:50.159207    9068 logs.go:123] Gathering logs for storage-provisioner [e66ab34a3554] ...
	I0805 10:40:50.159217    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e66ab34a3554"
	I0805 10:40:50.173342    9068 logs.go:123] Gathering logs for container status ...
	I0805 10:40:50.173354    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:40:50.185252    9068 logs.go:123] Gathering logs for kubelet ...
	I0805 10:40:50.185264    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:40:50.223925    9068 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:40:50.223933    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:40:50.259851    9068 logs.go:123] Gathering logs for etcd [0d7ebfcf0f52] ...
	I0805 10:40:50.259869    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d7ebfcf0f52"
	I0805 10:40:50.274701    9068 logs.go:123] Gathering logs for etcd [636206c34e2e] ...
	I0805 10:40:50.274712    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 636206c34e2e"
	I0805 10:40:50.290412    9068 logs.go:123] Gathering logs for coredns [3fc287268b86] ...
	I0805 10:40:50.290423    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fc287268b86"
	I0805 10:40:50.301586    9068 logs.go:123] Gathering logs for kube-scheduler [ea53529dcbfd] ...
	I0805 10:40:50.301598    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea53529dcbfd"
	I0805 10:40:50.313589    9068 logs.go:123] Gathering logs for storage-provisioner [78e756f0378c] ...
	I0805 10:40:50.313601    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78e756f0378c"
	I0805 10:40:50.324779    9068 logs.go:123] Gathering logs for Docker ...
	I0805 10:40:50.324792    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:40:50.348056    9068 logs.go:123] Gathering logs for dmesg ...
	I0805 10:40:50.348066    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:40:50.351710    9068 logs.go:123] Gathering logs for kube-apiserver [399a26756750] ...
	I0805 10:40:50.351719    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 399a26756750"
	I0805 10:40:50.365342    9068 logs.go:123] Gathering logs for kube-apiserver [3c41f12d029f] ...
	I0805 10:40:50.365354    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c41f12d029f"
	I0805 10:40:50.406517    9068 logs.go:123] Gathering logs for kube-scheduler [91f7a199884a] ...
	I0805 10:40:50.406525    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91f7a199884a"
	I0805 10:40:47.481655    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:40:47.481862    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:40:47.504917    9085 logs.go:276] 2 containers: [d4f28dbbc4f1 13057f94c0f8]
	I0805 10:40:47.505058    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:40:47.522047    9085 logs.go:276] 2 containers: [275aaaabca50 d71cd5277bf8]
	I0805 10:40:47.522128    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:40:47.535945    9085 logs.go:276] 1 containers: [d211fad9684e]
	I0805 10:40:47.536008    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:40:47.547048    9085 logs.go:276] 2 containers: [ef0fa267cdc3 7c8977dcd66d]
	I0805 10:40:47.547115    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:40:47.561977    9085 logs.go:276] 1 containers: [9df378c7864e]
	I0805 10:40:47.562046    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:40:47.572335    9085 logs.go:276] 2 containers: [7b6baa84c14b 2c9aa7466dbd]
	I0805 10:40:47.572410    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:40:47.583407    9085 logs.go:276] 0 containers: []
	W0805 10:40:47.583420    9085 logs.go:278] No container was found matching "kindnet"
	I0805 10:40:47.583479    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:40:47.599656    9085 logs.go:276] 0 containers: []
	W0805 10:40:47.599669    9085 logs.go:278] No container was found matching "storage-provisioner"
	I0805 10:40:47.599678    9085 logs.go:123] Gathering logs for kube-proxy [9df378c7864e] ...
	I0805 10:40:47.599683    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9df378c7864e"
	I0805 10:40:47.611816    9085 logs.go:123] Gathering logs for kubelet ...
	I0805 10:40:47.611830    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:40:47.652970    9085 logs.go:123] Gathering logs for dmesg ...
	I0805 10:40:47.652979    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:40:47.657561    9085 logs.go:123] Gathering logs for etcd [275aaaabca50] ...
	I0805 10:40:47.657570    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 275aaaabca50"
	I0805 10:40:47.674671    9085 logs.go:123] Gathering logs for kube-scheduler [ef0fa267cdc3] ...
	I0805 10:40:47.674683    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef0fa267cdc3"
	I0805 10:40:47.689615    9085 logs.go:123] Gathering logs for kube-scheduler [7c8977dcd66d] ...
	I0805 10:40:47.689626    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c8977dcd66d"
	I0805 10:40:47.702212    9085 logs.go:123] Gathering logs for container status ...
	I0805 10:40:47.702225    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:40:47.714738    9085 logs.go:123] Gathering logs for kube-apiserver [d4f28dbbc4f1] ...
	I0805 10:40:47.714751    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f28dbbc4f1"
	I0805 10:40:47.729428    9085 logs.go:123] Gathering logs for etcd [d71cd5277bf8] ...
	I0805 10:40:47.729440    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71cd5277bf8"
	I0805 10:40:47.767206    9085 logs.go:123] Gathering logs for coredns [d211fad9684e] ...
	I0805 10:40:47.767217    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d211fad9684e"
	I0805 10:40:47.778214    9085 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:40:47.778226    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:40:47.817884    9085 logs.go:123] Gathering logs for kube-apiserver [13057f94c0f8] ...
	I0805 10:40:47.817899    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13057f94c0f8"
	I0805 10:40:47.829714    9085 logs.go:123] Gathering logs for kube-controller-manager [7b6baa84c14b] ...
	I0805 10:40:47.829729    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b6baa84c14b"
	I0805 10:40:47.848235    9085 logs.go:123] Gathering logs for kube-controller-manager [2c9aa7466dbd] ...
	I0805 10:40:47.848246    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c9aa7466dbd"
	I0805 10:40:47.859457    9085 logs.go:123] Gathering logs for Docker ...
	I0805 10:40:47.859470    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
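
Taken together, each process runs this probe-then-gather sequence in a loop: a roughly 5-second probe timeout, a sub-second diagnostics pass, a short pause, then the next probe, until the test's overall budget runs out. A self-contained sketch of that loop (the intervals are read off the timestamps; the loop structure and the 4-minute budget are assumptions, not minikube's exact wait logic):

```go
// Probe-then-gather retry loop matching the cadence of the log above.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// probe reports whether GET url answers 200 within the ~5 s client
// timeout the timestamps imply (check at :29, "stopped" at :34, ...).
func probe(url string) bool {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return false
	}
	resp.Body.Close()
	return resp.StatusCode == http.StatusOK
}

func main() {
	url := "https://10.0.2.15:8443/healthz"
	deadline := time.Now().Add(4 * time.Minute) // assumed overall budget
	for time.Now().Before(deadline) {
		if probe(url) {
			fmt.Println("apiserver healthy")
			return
		}
		// in the real log a full diagnostics pass (docker ps, docker logs,
		// journalctl, dmesg, describe nodes) runs here before re-probing
		time.Sleep(3 * time.Second)
	}
	fmt.Println("apiserver never became healthy")
}
```
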
	I0805 10:40:50.386368    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:40:52.926854    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:40:55.388649    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:40:55.389100    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:40:55.432625    9085 logs.go:276] 2 containers: [d4f28dbbc4f1 13057f94c0f8]
	I0805 10:40:55.432776    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:40:55.453776    9085 logs.go:276] 2 containers: [275aaaabca50 d71cd5277bf8]
	I0805 10:40:55.453888    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:40:55.468955    9085 logs.go:276] 1 containers: [d211fad9684e]
	I0805 10:40:55.469034    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:40:55.488188    9085 logs.go:276] 2 containers: [ef0fa267cdc3 7c8977dcd66d]
	I0805 10:40:55.488255    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:40:55.498725    9085 logs.go:276] 1 containers: [9df378c7864e]
	I0805 10:40:55.498795    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:40:55.509613    9085 logs.go:276] 2 containers: [7b6baa84c14b 2c9aa7466dbd]
	I0805 10:40:55.509678    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:40:55.520183    9085 logs.go:276] 0 containers: []
	W0805 10:40:55.520196    9085 logs.go:278] No container was found matching "kindnet"
	I0805 10:40:55.520252    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:40:55.530510    9085 logs.go:276] 0 containers: []
	W0805 10:40:55.530521    9085 logs.go:278] No container was found matching "storage-provisioner"
	I0805 10:40:55.530529    9085 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:40:55.530536    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:40:55.564262    9085 logs.go:123] Gathering logs for etcd [d71cd5277bf8] ...
	I0805 10:40:55.564275    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71cd5277bf8"
	I0805 10:40:55.580242    9085 logs.go:123] Gathering logs for kube-proxy [9df378c7864e] ...
	I0805 10:40:55.580256    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9df378c7864e"
	I0805 10:40:55.592448    9085 logs.go:123] Gathering logs for Docker ...
	I0805 10:40:55.592463    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:40:55.617644    9085 logs.go:123] Gathering logs for container status ...
	I0805 10:40:55.617653    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:40:55.629546    9085 logs.go:123] Gathering logs for kube-apiserver [13057f94c0f8] ...
	I0805 10:40:55.629557    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13057f94c0f8"
	I0805 10:40:55.642000    9085 logs.go:123] Gathering logs for etcd [275aaaabca50] ...
	I0805 10:40:55.642011    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 275aaaabca50"
	I0805 10:40:55.663172    9085 logs.go:123] Gathering logs for coredns [d211fad9684e] ...
	I0805 10:40:55.663187    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d211fad9684e"
	I0805 10:40:55.675037    9085 logs.go:123] Gathering logs for kube-controller-manager [7b6baa84c14b] ...
	I0805 10:40:55.675052    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b6baa84c14b"
	I0805 10:40:55.692294    9085 logs.go:123] Gathering logs for kube-controller-manager [2c9aa7466dbd] ...
	I0805 10:40:55.692309    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c9aa7466dbd"
	I0805 10:40:55.703624    9085 logs.go:123] Gathering logs for dmesg ...
	I0805 10:40:55.703634    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:40:55.707830    9085 logs.go:123] Gathering logs for kube-scheduler [ef0fa267cdc3] ...
	I0805 10:40:55.707839    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef0fa267cdc3"
	I0805 10:40:55.719884    9085 logs.go:123] Gathering logs for kube-scheduler [7c8977dcd66d] ...
	I0805 10:40:55.719894    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c8977dcd66d"
	I0805 10:40:55.731174    9085 logs.go:123] Gathering logs for kubelet ...
	I0805 10:40:55.731186    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:40:55.772852    9085 logs.go:123] Gathering logs for kube-apiserver [d4f28dbbc4f1] ...
	I0805 10:40:55.772863    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f28dbbc4f1"
	I0805 10:40:57.928479    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:40:57.928599    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:40:57.945867    9068 logs.go:276] 2 containers: [399a26756750 3c41f12d029f]
	I0805 10:40:57.945962    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:40:57.959336    9068 logs.go:276] 2 containers: [0d7ebfcf0f52 636206c34e2e]
	I0805 10:40:57.959412    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:40:57.970803    9068 logs.go:276] 1 containers: [3fc287268b86]
	I0805 10:40:57.970872    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:40:57.981555    9068 logs.go:276] 2 containers: [ea53529dcbfd 91f7a199884a]
	I0805 10:40:57.981622    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:40:57.992242    9068 logs.go:276] 1 containers: [a29b2ad2a3db]
	I0805 10:40:57.992322    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:40:58.002966    9068 logs.go:276] 2 containers: [a2439666ad31 e1cc9e5e2f59]
	I0805 10:40:58.003036    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:40:58.012677    9068 logs.go:276] 0 containers: []
	W0805 10:40:58.012694    9068 logs.go:278] No container was found matching "kindnet"
	I0805 10:40:58.012750    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:40:58.023070    9068 logs.go:276] 2 containers: [e66ab34a3554 78e756f0378c]
	I0805 10:40:58.023088    9068 logs.go:123] Gathering logs for kube-apiserver [399a26756750] ...
	I0805 10:40:58.023094    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 399a26756750"
	I0805 10:40:58.037182    9068 logs.go:123] Gathering logs for kube-apiserver [3c41f12d029f] ...
	I0805 10:40:58.037193    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c41f12d029f"
	I0805 10:40:58.074985    9068 logs.go:123] Gathering logs for kube-scheduler [ea53529dcbfd] ...
	I0805 10:40:58.074994    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea53529dcbfd"
	I0805 10:40:58.086752    9068 logs.go:123] Gathering logs for kubelet ...
	I0805 10:40:58.086765    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:40:58.122546    9068 logs.go:123] Gathering logs for etcd [0d7ebfcf0f52] ...
	I0805 10:40:58.122553    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d7ebfcf0f52"
	I0805 10:40:58.136131    9068 logs.go:123] Gathering logs for coredns [3fc287268b86] ...
	I0805 10:40:58.136142    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fc287268b86"
	I0805 10:40:58.148055    9068 logs.go:123] Gathering logs for kube-scheduler [91f7a199884a] ...
	I0805 10:40:58.148068    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91f7a199884a"
	I0805 10:40:58.160047    9068 logs.go:123] Gathering logs for kube-controller-manager [a2439666ad31] ...
	I0805 10:40:58.160059    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2439666ad31"
	I0805 10:40:58.178328    9068 logs.go:123] Gathering logs for storage-provisioner [e66ab34a3554] ...
	I0805 10:40:58.178339    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e66ab34a3554"
	I0805 10:40:58.189701    9068 logs.go:123] Gathering logs for dmesg ...
	I0805 10:40:58.189715    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:40:58.194115    9068 logs.go:123] Gathering logs for kube-proxy [a29b2ad2a3db] ...
	I0805 10:40:58.194123    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a29b2ad2a3db"
	I0805 10:40:58.205727    9068 logs.go:123] Gathering logs for Docker ...
	I0805 10:40:58.205735    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:40:58.228425    9068 logs.go:123] Gathering logs for container status ...
	I0805 10:40:58.228440    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:40:58.241532    9068 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:40:58.241544    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:40:58.277797    9068 logs.go:123] Gathering logs for etcd [636206c34e2e] ...
	I0805 10:40:58.277811    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 636206c34e2e"
	I0805 10:40:58.292440    9068 logs.go:123] Gathering logs for kube-controller-manager [e1cc9e5e2f59] ...
	I0805 10:40:58.292448    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1cc9e5e2f59"
	I0805 10:40:58.306423    9068 logs.go:123] Gathering logs for storage-provisioner [78e756f0378c] ...
	I0805 10:40:58.306431    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78e756f0378c"
	I0805 10:40:58.289182    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:41:00.822779    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:41:03.291357    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:41:03.291608    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:41:03.315125    9085 logs.go:276] 2 containers: [d4f28dbbc4f1 13057f94c0f8]
	I0805 10:41:03.315270    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:41:03.330467    9085 logs.go:276] 2 containers: [275aaaabca50 d71cd5277bf8]
	I0805 10:41:03.330548    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:41:03.347991    9085 logs.go:276] 1 containers: [d211fad9684e]
	I0805 10:41:03.348062    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:41:03.358488    9085 logs.go:276] 2 containers: [ef0fa267cdc3 7c8977dcd66d]
	I0805 10:41:03.358560    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:41:03.372497    9085 logs.go:276] 1 containers: [9df378c7864e]
	I0805 10:41:03.372559    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:41:03.382924    9085 logs.go:276] 2 containers: [7b6baa84c14b 2c9aa7466dbd]
	I0805 10:41:03.382983    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:41:03.392782    9085 logs.go:276] 0 containers: []
	W0805 10:41:03.392793    9085 logs.go:278] No container was found matching "kindnet"
	I0805 10:41:03.392844    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:41:03.403042    9085 logs.go:276] 0 containers: []
	W0805 10:41:03.403055    9085 logs.go:278] No container was found matching "storage-provisioner"
	I0805 10:41:03.403063    9085 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:41:03.403068    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:41:03.438822    9085 logs.go:123] Gathering logs for kube-apiserver [d4f28dbbc4f1] ...
	I0805 10:41:03.438833    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f28dbbc4f1"
	I0805 10:41:03.452874    9085 logs.go:123] Gathering logs for kube-controller-manager [2c9aa7466dbd] ...
	I0805 10:41:03.452884    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c9aa7466dbd"
	I0805 10:41:03.464649    9085 logs.go:123] Gathering logs for container status ...
	I0805 10:41:03.464660    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:41:03.476372    9085 logs.go:123] Gathering logs for etcd [275aaaabca50] ...
	I0805 10:41:03.476384    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 275aaaabca50"
	I0805 10:41:03.490343    9085 logs.go:123] Gathering logs for kube-scheduler [ef0fa267cdc3] ...
	I0805 10:41:03.490353    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef0fa267cdc3"
	I0805 10:41:03.502251    9085 logs.go:123] Gathering logs for kube-controller-manager [7b6baa84c14b] ...
	I0805 10:41:03.502262    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b6baa84c14b"
	I0805 10:41:03.520538    9085 logs.go:123] Gathering logs for Docker ...
	I0805 10:41:03.520548    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:41:03.545759    9085 logs.go:123] Gathering logs for kubelet ...
	I0805 10:41:03.545772    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:41:03.588056    9085 logs.go:123] Gathering logs for dmesg ...
	I0805 10:41:03.588069    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:41:03.592577    9085 logs.go:123] Gathering logs for etcd [d71cd5277bf8] ...
	I0805 10:41:03.592585    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71cd5277bf8"
	I0805 10:41:03.607364    9085 logs.go:123] Gathering logs for kube-scheduler [7c8977dcd66d] ...
	I0805 10:41:03.607375    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c8977dcd66d"
	I0805 10:41:03.618916    9085 logs.go:123] Gathering logs for kube-proxy [9df378c7864e] ...
	I0805 10:41:03.618929    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9df378c7864e"
	I0805 10:41:03.630901    9085 logs.go:123] Gathering logs for kube-apiserver [13057f94c0f8] ...
	I0805 10:41:03.630913    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13057f94c0f8"
	I0805 10:41:03.642770    9085 logs.go:123] Gathering logs for coredns [d211fad9684e] ...
	I0805 10:41:03.642781    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d211fad9684e"
	I0805 10:41:06.156217    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:41:05.824592    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:41:05.824985    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:41:05.857242    9068 logs.go:276] 2 containers: [399a26756750 3c41f12d029f]
	I0805 10:41:05.857376    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:41:05.883083    9068 logs.go:276] 2 containers: [0d7ebfcf0f52 636206c34e2e]
	I0805 10:41:05.883166    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:41:05.898126    9068 logs.go:276] 1 containers: [3fc287268b86]
	I0805 10:41:05.898191    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:41:05.909636    9068 logs.go:276] 2 containers: [ea53529dcbfd 91f7a199884a]
	I0805 10:41:05.909709    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:41:05.920826    9068 logs.go:276] 1 containers: [a29b2ad2a3db]
	I0805 10:41:05.920895    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:41:05.933817    9068 logs.go:276] 2 containers: [a2439666ad31 e1cc9e5e2f59]
	I0805 10:41:05.933887    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:41:05.944976    9068 logs.go:276] 0 containers: []
	W0805 10:41:05.944989    9068 logs.go:278] No container was found matching "kindnet"
	I0805 10:41:05.945049    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:41:05.956180    9068 logs.go:276] 2 containers: [e66ab34a3554 78e756f0378c]
	I0805 10:41:05.956204    9068 logs.go:123] Gathering logs for container status ...
	I0805 10:41:05.956209    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:41:05.972517    9068 logs.go:123] Gathering logs for coredns [3fc287268b86] ...
	I0805 10:41:05.972532    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fc287268b86"
	I0805 10:41:05.983805    9068 logs.go:123] Gathering logs for kube-scheduler [ea53529dcbfd] ...
	I0805 10:41:05.983815    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea53529dcbfd"
	I0805 10:41:05.995843    9068 logs.go:123] Gathering logs for kube-proxy [a29b2ad2a3db] ...
	I0805 10:41:05.995855    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a29b2ad2a3db"
	I0805 10:41:06.011924    9068 logs.go:123] Gathering logs for storage-provisioner [e66ab34a3554] ...
	I0805 10:41:06.011935    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e66ab34a3554"
	I0805 10:41:06.023564    9068 logs.go:123] Gathering logs for kubelet ...
	I0805 10:41:06.023579    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:41:06.060067    9068 logs.go:123] Gathering logs for kube-controller-manager [a2439666ad31] ...
	I0805 10:41:06.060076    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2439666ad31"
	I0805 10:41:06.078216    9068 logs.go:123] Gathering logs for storage-provisioner [78e756f0378c] ...
	I0805 10:41:06.078226    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78e756f0378c"
	I0805 10:41:06.089656    9068 logs.go:123] Gathering logs for Docker ...
	I0805 10:41:06.089668    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:41:06.112269    9068 logs.go:123] Gathering logs for kube-scheduler [91f7a199884a] ...
	I0805 10:41:06.112276    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91f7a199884a"
	I0805 10:41:06.124456    9068 logs.go:123] Gathering logs for kube-controller-manager [e1cc9e5e2f59] ...
	I0805 10:41:06.124468    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1cc9e5e2f59"
	I0805 10:41:06.138557    9068 logs.go:123] Gathering logs for etcd [0d7ebfcf0f52] ...
	I0805 10:41:06.138570    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d7ebfcf0f52"
	I0805 10:41:06.153924    9068 logs.go:123] Gathering logs for etcd [636206c34e2e] ...
	I0805 10:41:06.153935    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 636206c34e2e"
	I0805 10:41:06.169961    9068 logs.go:123] Gathering logs for dmesg ...
	I0805 10:41:06.169973    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:41:06.174436    9068 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:41:06.174443    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:41:06.209915    9068 logs.go:123] Gathering logs for kube-apiserver [399a26756750] ...
	I0805 10:41:06.209927    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 399a26756750"
	I0805 10:41:06.224179    9068 logs.go:123] Gathering logs for kube-apiserver [3c41f12d029f] ...
	I0805 10:41:06.224193    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c41f12d029f"
	I0805 10:41:08.763690    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:41:11.157608    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:41:11.158043    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:41:11.194715    9085 logs.go:276] 2 containers: [d4f28dbbc4f1 13057f94c0f8]
	I0805 10:41:11.194836    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:41:11.212928    9085 logs.go:276] 2 containers: [275aaaabca50 d71cd5277bf8]
	I0805 10:41:11.213022    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:41:11.226486    9085 logs.go:276] 1 containers: [d211fad9684e]
	I0805 10:41:11.226562    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:41:11.237614    9085 logs.go:276] 2 containers: [ef0fa267cdc3 7c8977dcd66d]
	I0805 10:41:11.237680    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:41:11.248363    9085 logs.go:276] 1 containers: [9df378c7864e]
	I0805 10:41:11.248430    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:41:11.258747    9085 logs.go:276] 2 containers: [7b6baa84c14b 2c9aa7466dbd]
	I0805 10:41:11.258815    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:41:11.269241    9085 logs.go:276] 0 containers: []
	W0805 10:41:11.269252    9085 logs.go:278] No container was found matching "kindnet"
	I0805 10:41:11.269306    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:41:11.279502    9085 logs.go:276] 0 containers: []
	W0805 10:41:11.279514    9085 logs.go:278] No container was found matching "storage-provisioner"
	I0805 10:41:11.279522    9085 logs.go:123] Gathering logs for kube-apiserver [13057f94c0f8] ...
	I0805 10:41:11.279528    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13057f94c0f8"
	I0805 10:41:11.292296    9085 logs.go:123] Gathering logs for etcd [275aaaabca50] ...
	I0805 10:41:11.292309    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 275aaaabca50"
	I0805 10:41:11.306917    9085 logs.go:123] Gathering logs for kube-scheduler [ef0fa267cdc3] ...
	I0805 10:41:11.306928    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef0fa267cdc3"
	I0805 10:41:11.318785    9085 logs.go:123] Gathering logs for container status ...
	I0805 10:41:11.318796    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:41:11.330437    9085 logs.go:123] Gathering logs for kube-apiserver [d4f28dbbc4f1] ...
	I0805 10:41:11.330448    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f28dbbc4f1"
	I0805 10:41:11.345470    9085 logs.go:123] Gathering logs for etcd [d71cd5277bf8] ...
	I0805 10:41:11.345481    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71cd5277bf8"
	I0805 10:41:11.359927    9085 logs.go:123] Gathering logs for kube-controller-manager [2c9aa7466dbd] ...
	I0805 10:41:11.359938    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c9aa7466dbd"
	I0805 10:41:11.371693    9085 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:41:11.371705    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:41:11.408226    9085 logs.go:123] Gathering logs for coredns [d211fad9684e] ...
	I0805 10:41:11.408239    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d211fad9684e"
	I0805 10:41:11.420204    9085 logs.go:123] Gathering logs for Docker ...
	I0805 10:41:11.420215    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:41:11.444083    9085 logs.go:123] Gathering logs for kubelet ...
	I0805 10:41:11.444091    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:41:11.483698    9085 logs.go:123] Gathering logs for dmesg ...
	I0805 10:41:11.483709    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:41:11.487965    9085 logs.go:123] Gathering logs for kube-scheduler [7c8977dcd66d] ...
	I0805 10:41:11.487973    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c8977dcd66d"
	I0805 10:41:11.499166    9085 logs.go:123] Gathering logs for kube-proxy [9df378c7864e] ...
	I0805 10:41:11.499178    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9df378c7864e"
	I0805 10:41:11.515078    9085 logs.go:123] Gathering logs for kube-controller-manager [7b6baa84c14b] ...
	I0805 10:41:11.515090    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b6baa84c14b"
	I0805 10:41:13.766002    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:41:13.766173    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:41:13.785905    9068 logs.go:276] 2 containers: [399a26756750 3c41f12d029f]
	I0805 10:41:13.786004    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:41:13.800817    9068 logs.go:276] 2 containers: [0d7ebfcf0f52 636206c34e2e]
	I0805 10:41:13.800897    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:41:13.813021    9068 logs.go:276] 1 containers: [3fc287268b86]
	I0805 10:41:13.813094    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:41:13.825659    9068 logs.go:276] 2 containers: [ea53529dcbfd 91f7a199884a]
	I0805 10:41:13.825730    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:41:13.835800    9068 logs.go:276] 1 containers: [a29b2ad2a3db]
	I0805 10:41:13.835873    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:41:13.846064    9068 logs.go:276] 2 containers: [a2439666ad31 e1cc9e5e2f59]
	I0805 10:41:13.846126    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:41:13.856572    9068 logs.go:276] 0 containers: []
	W0805 10:41:13.856587    9068 logs.go:278] No container was found matching "kindnet"
	I0805 10:41:13.856640    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:41:13.871200    9068 logs.go:276] 2 containers: [e66ab34a3554 78e756f0378c]
	I0805 10:41:13.871219    9068 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:41:13.871225    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:41:13.904960    9068 logs.go:123] Gathering logs for kube-apiserver [3c41f12d029f] ...
	I0805 10:41:13.904972    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c41f12d029f"
	I0805 10:41:13.942136    9068 logs.go:123] Gathering logs for kube-scheduler [ea53529dcbfd] ...
	I0805 10:41:13.942153    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea53529dcbfd"
	I0805 10:41:13.955544    9068 logs.go:123] Gathering logs for kube-scheduler [91f7a199884a] ...
	I0805 10:41:13.955562    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91f7a199884a"
	I0805 10:41:13.968234    9068 logs.go:123] Gathering logs for storage-provisioner [e66ab34a3554] ...
	I0805 10:41:13.968246    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e66ab34a3554"
	I0805 10:41:13.979910    9068 logs.go:123] Gathering logs for Docker ...
	I0805 10:41:13.979921    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:41:14.003007    9068 logs.go:123] Gathering logs for dmesg ...
	I0805 10:41:14.003019    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:41:14.007053    9068 logs.go:123] Gathering logs for etcd [0d7ebfcf0f52] ...
	I0805 10:41:14.007063    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d7ebfcf0f52"
	I0805 10:41:14.020576    9068 logs.go:123] Gathering logs for kube-controller-manager [e1cc9e5e2f59] ...
	I0805 10:41:14.020586    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1cc9e5e2f59"
	I0805 10:41:14.034977    9068 logs.go:123] Gathering logs for storage-provisioner [78e756f0378c] ...
	I0805 10:41:14.034985    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78e756f0378c"
	I0805 10:41:14.046064    9068 logs.go:123] Gathering logs for container status ...
	I0805 10:41:14.046080    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:41:14.057695    9068 logs.go:123] Gathering logs for kube-apiserver [399a26756750] ...
	I0805 10:41:14.057708    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 399a26756750"
	I0805 10:41:14.072336    9068 logs.go:123] Gathering logs for etcd [636206c34e2e] ...
	I0805 10:41:14.072346    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 636206c34e2e"
	I0805 10:41:14.087001    9068 logs.go:123] Gathering logs for kube-proxy [a29b2ad2a3db] ...
	I0805 10:41:14.087012    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a29b2ad2a3db"
	I0805 10:41:14.098651    9068 logs.go:123] Gathering logs for kubelet ...
	I0805 10:41:14.098663    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:41:14.136708    9068 logs.go:123] Gathering logs for coredns [3fc287268b86] ...
	I0805 10:41:14.136716    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fc287268b86"
	I0805 10:41:14.147782    9068 logs.go:123] Gathering logs for kube-controller-manager [a2439666ad31] ...
	I0805 10:41:14.147794    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2439666ad31"
	I0805 10:41:14.033886    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:41:16.669434    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:41:19.036022    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:41:19.036174    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:41:19.063630    9085 logs.go:276] 2 containers: [d4f28dbbc4f1 13057f94c0f8]
	I0805 10:41:19.063719    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:41:19.081869    9085 logs.go:276] 2 containers: [275aaaabca50 d71cd5277bf8]
	I0805 10:41:19.081945    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:41:19.092771    9085 logs.go:276] 1 containers: [d211fad9684e]
	I0805 10:41:19.092839    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:41:19.103308    9085 logs.go:276] 2 containers: [ef0fa267cdc3 7c8977dcd66d]
	I0805 10:41:19.103383    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:41:19.119946    9085 logs.go:276] 1 containers: [9df378c7864e]
	I0805 10:41:19.120025    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:41:19.130959    9085 logs.go:276] 2 containers: [7b6baa84c14b 2c9aa7466dbd]
	I0805 10:41:19.131020    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:41:19.141431    9085 logs.go:276] 0 containers: []
	W0805 10:41:19.141442    9085 logs.go:278] No container was found matching "kindnet"
	I0805 10:41:19.141494    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:41:19.152669    9085 logs.go:276] 0 containers: []
	W0805 10:41:19.152686    9085 logs.go:278] No container was found matching "storage-provisioner"
	I0805 10:41:19.152694    9085 logs.go:123] Gathering logs for kube-scheduler [ef0fa267cdc3] ...
	I0805 10:41:19.152699    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef0fa267cdc3"
	I0805 10:41:19.168348    9085 logs.go:123] Gathering logs for kube-scheduler [7c8977dcd66d] ...
	I0805 10:41:19.168358    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c8977dcd66d"
	I0805 10:41:19.179942    9085 logs.go:123] Gathering logs for kube-proxy [9df378c7864e] ...
	I0805 10:41:19.179954    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9df378c7864e"
	I0805 10:41:19.191113    9085 logs.go:123] Gathering logs for container status ...
	I0805 10:41:19.191125    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:41:19.202690    9085 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:41:19.202700    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:41:19.236521    9085 logs.go:123] Gathering logs for etcd [275aaaabca50] ...
	I0805 10:41:19.236531    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 275aaaabca50"
	I0805 10:41:19.250212    9085 logs.go:123] Gathering logs for dmesg ...
	I0805 10:41:19.250224    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:41:19.254860    9085 logs.go:123] Gathering logs for kube-apiserver [d4f28dbbc4f1] ...
	I0805 10:41:19.254870    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f28dbbc4f1"
	I0805 10:41:19.269233    9085 logs.go:123] Gathering logs for kube-apiserver [13057f94c0f8] ...
	I0805 10:41:19.269246    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13057f94c0f8"
	I0805 10:41:19.284722    9085 logs.go:123] Gathering logs for etcd [d71cd5277bf8] ...
	I0805 10:41:19.284733    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71cd5277bf8"
	I0805 10:41:19.299061    9085 logs.go:123] Gathering logs for kube-controller-manager [2c9aa7466dbd] ...
	I0805 10:41:19.299071    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c9aa7466dbd"
	I0805 10:41:19.310441    9085 logs.go:123] Gathering logs for Docker ...
	I0805 10:41:19.310453    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:41:19.333827    9085 logs.go:123] Gathering logs for kubelet ...
	I0805 10:41:19.333834    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:41:19.374813    9085 logs.go:123] Gathering logs for coredns [d211fad9684e] ...
	I0805 10:41:19.374826    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d211fad9684e"
	I0805 10:41:19.386578    9085 logs.go:123] Gathering logs for kube-controller-manager [7b6baa84c14b] ...
	I0805 10:41:19.386596    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b6baa84c14b"
	I0805 10:41:21.905421    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:41:21.671905    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:41:21.672106    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:41:21.694676    9068 logs.go:276] 2 containers: [399a26756750 3c41f12d029f]
	I0805 10:41:21.694781    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:41:21.709672    9068 logs.go:276] 2 containers: [0d7ebfcf0f52 636206c34e2e]
	I0805 10:41:21.709753    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:41:21.722506    9068 logs.go:276] 1 containers: [3fc287268b86]
	I0805 10:41:21.722581    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:41:21.733947    9068 logs.go:276] 2 containers: [ea53529dcbfd 91f7a199884a]
	I0805 10:41:21.734019    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:41:21.744155    9068 logs.go:276] 1 containers: [a29b2ad2a3db]
	I0805 10:41:21.744227    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:41:21.754446    9068 logs.go:276] 2 containers: [a2439666ad31 e1cc9e5e2f59]
	I0805 10:41:21.754520    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:41:21.765078    9068 logs.go:276] 0 containers: []
	W0805 10:41:21.765090    9068 logs.go:278] No container was found matching "kindnet"
	I0805 10:41:21.765146    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:41:21.775577    9068 logs.go:276] 2 containers: [e66ab34a3554 78e756f0378c]
	I0805 10:41:21.775596    9068 logs.go:123] Gathering logs for kubelet ...
	I0805 10:41:21.775601    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:41:21.815199    9068 logs.go:123] Gathering logs for kube-proxy [a29b2ad2a3db] ...
	I0805 10:41:21.815208    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a29b2ad2a3db"
	I0805 10:41:21.826681    9068 logs.go:123] Gathering logs for storage-provisioner [e66ab34a3554] ...
	I0805 10:41:21.826691    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e66ab34a3554"
	I0805 10:41:21.838344    9068 logs.go:123] Gathering logs for Docker ...
	I0805 10:41:21.838353    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:41:21.861629    9068 logs.go:123] Gathering logs for dmesg ...
	I0805 10:41:21.861636    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:41:21.865670    9068 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:41:21.865680    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:41:21.902501    9068 logs.go:123] Gathering logs for kube-apiserver [399a26756750] ...
	I0805 10:41:21.902512    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 399a26756750"
	I0805 10:41:21.916574    9068 logs.go:123] Gathering logs for kube-apiserver [3c41f12d029f] ...
	I0805 10:41:21.916584    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c41f12d029f"
	I0805 10:41:21.954328    9068 logs.go:123] Gathering logs for coredns [3fc287268b86] ...
	I0805 10:41:21.954339    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fc287268b86"
	I0805 10:41:21.965404    9068 logs.go:123] Gathering logs for kube-scheduler [ea53529dcbfd] ...
	I0805 10:41:21.965414    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea53529dcbfd"
	I0805 10:41:21.981344    9068 logs.go:123] Gathering logs for kube-controller-manager [e1cc9e5e2f59] ...
	I0805 10:41:21.981354    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1cc9e5e2f59"
	I0805 10:41:21.995093    9068 logs.go:123] Gathering logs for etcd [636206c34e2e] ...
	I0805 10:41:21.995105    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 636206c34e2e"
	I0805 10:41:22.009568    9068 logs.go:123] Gathering logs for kube-controller-manager [a2439666ad31] ...
	I0805 10:41:22.009579    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2439666ad31"
	I0805 10:41:22.026765    9068 logs.go:123] Gathering logs for container status ...
	I0805 10:41:22.026776    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:41:22.039373    9068 logs.go:123] Gathering logs for etcd [0d7ebfcf0f52] ...
	I0805 10:41:22.039383    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d7ebfcf0f52"
	I0805 10:41:22.053635    9068 logs.go:123] Gathering logs for kube-scheduler [91f7a199884a] ...
	I0805 10:41:22.053649    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91f7a199884a"
	I0805 10:41:22.069131    9068 logs.go:123] Gathering logs for storage-provisioner [78e756f0378c] ...
	I0805 10:41:22.069144    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78e756f0378c"
	I0805 10:41:24.582378    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:41:26.907597    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:41:26.907856    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:41:26.933785    9085 logs.go:276] 2 containers: [d4f28dbbc4f1 13057f94c0f8]
	I0805 10:41:26.933923    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:41:26.951677    9085 logs.go:276] 2 containers: [275aaaabca50 d71cd5277bf8]
	I0805 10:41:26.951768    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:41:26.964697    9085 logs.go:276] 1 containers: [d211fad9684e]
	I0805 10:41:26.964772    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:41:26.976096    9085 logs.go:276] 2 containers: [ef0fa267cdc3 7c8977dcd66d]
	I0805 10:41:26.976158    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:41:26.986413    9085 logs.go:276] 1 containers: [9df378c7864e]
	I0805 10:41:26.986483    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:41:26.999849    9085 logs.go:276] 2 containers: [7b6baa84c14b 2c9aa7466dbd]
	I0805 10:41:26.999918    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:41:27.011123    9085 logs.go:276] 0 containers: []
	W0805 10:41:27.011135    9085 logs.go:278] No container was found matching "kindnet"
	I0805 10:41:27.011190    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:41:27.020867    9085 logs.go:276] 0 containers: []
	W0805 10:41:27.020880    9085 logs.go:278] No container was found matching "storage-provisioner"
	I0805 10:41:27.020887    9085 logs.go:123] Gathering logs for Docker ...
	I0805 10:41:27.020893    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:41:27.044786    9085 logs.go:123] Gathering logs for container status ...
	I0805 10:41:27.044793    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:41:27.056702    9085 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:41:27.056712    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:41:27.093277    9085 logs.go:123] Gathering logs for coredns [d211fad9684e] ...
	I0805 10:41:27.093289    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d211fad9684e"
	I0805 10:41:27.104317    9085 logs.go:123] Gathering logs for etcd [d71cd5277bf8] ...
	I0805 10:41:27.104332    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71cd5277bf8"
	I0805 10:41:27.118340    9085 logs.go:123] Gathering logs for kube-scheduler [ef0fa267cdc3] ...
	I0805 10:41:27.118350    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef0fa267cdc3"
	I0805 10:41:27.130268    9085 logs.go:123] Gathering logs for kube-proxy [9df378c7864e] ...
	I0805 10:41:27.130279    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9df378c7864e"
	I0805 10:41:27.146394    9085 logs.go:123] Gathering logs for kube-controller-manager [7b6baa84c14b] ...
	I0805 10:41:27.146406    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b6baa84c14b"
	I0805 10:41:27.163524    9085 logs.go:123] Gathering logs for kubelet ...
	I0805 10:41:27.163533    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:41:27.205858    9085 logs.go:123] Gathering logs for etcd [275aaaabca50] ...
	I0805 10:41:27.205869    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 275aaaabca50"
	I0805 10:41:27.220701    9085 logs.go:123] Gathering logs for kube-apiserver [13057f94c0f8] ...
	I0805 10:41:27.220710    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13057f94c0f8"
	I0805 10:41:27.234899    9085 logs.go:123] Gathering logs for kube-scheduler [7c8977dcd66d] ...
	I0805 10:41:27.234910    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c8977dcd66d"
	I0805 10:41:27.246378    9085 logs.go:123] Gathering logs for kube-controller-manager [2c9aa7466dbd] ...
	I0805 10:41:27.246389    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c9aa7466dbd"
	I0805 10:41:27.258018    9085 logs.go:123] Gathering logs for dmesg ...
	I0805 10:41:27.258029    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:41:27.262800    9085 logs.go:123] Gathering logs for kube-apiserver [d4f28dbbc4f1] ...
	I0805 10:41:27.262807    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f28dbbc4f1"
	I0805 10:41:29.585002    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:41:29.585308    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:41:29.621044    9068 logs.go:276] 2 containers: [399a26756750 3c41f12d029f]
	I0805 10:41:29.621178    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:41:29.639330    9068 logs.go:276] 2 containers: [0d7ebfcf0f52 636206c34e2e]
	I0805 10:41:29.639425    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:41:29.663110    9068 logs.go:276] 1 containers: [3fc287268b86]
	I0805 10:41:29.663186    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:41:29.678500    9068 logs.go:276] 2 containers: [ea53529dcbfd 91f7a199884a]
	I0805 10:41:29.678583    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:41:29.689165    9068 logs.go:276] 1 containers: [a29b2ad2a3db]
	I0805 10:41:29.689236    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:41:29.699715    9068 logs.go:276] 2 containers: [a2439666ad31 e1cc9e5e2f59]
	I0805 10:41:29.699787    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:41:29.710442    9068 logs.go:276] 0 containers: []
	W0805 10:41:29.710456    9068 logs.go:278] No container was found matching "kindnet"
	I0805 10:41:29.710514    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:41:29.720854    9068 logs.go:276] 2 containers: [e66ab34a3554 78e756f0378c]
	I0805 10:41:29.720874    9068 logs.go:123] Gathering logs for kubelet ...
	I0805 10:41:29.720880    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:41:29.757538    9068 logs.go:123] Gathering logs for coredns [3fc287268b86] ...
	I0805 10:41:29.757554    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fc287268b86"
	I0805 10:41:29.769482    9068 logs.go:123] Gathering logs for kube-controller-manager [a2439666ad31] ...
	I0805 10:41:29.769495    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2439666ad31"
	I0805 10:41:29.793011    9068 logs.go:123] Gathering logs for kube-controller-manager [e1cc9e5e2f59] ...
	I0805 10:41:29.793022    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1cc9e5e2f59"
	I0805 10:41:29.810129    9068 logs.go:123] Gathering logs for container status ...
	I0805 10:41:29.810139    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:41:29.822010    9068 logs.go:123] Gathering logs for storage-provisioner [e66ab34a3554] ...
	I0805 10:41:29.822021    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e66ab34a3554"
	I0805 10:41:29.834559    9068 logs.go:123] Gathering logs for storage-provisioner [78e756f0378c] ...
	I0805 10:41:29.834571    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78e756f0378c"
	I0805 10:41:29.845875    9068 logs.go:123] Gathering logs for kube-apiserver [399a26756750] ...
	I0805 10:41:29.845887    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 399a26756750"
	I0805 10:41:29.859451    9068 logs.go:123] Gathering logs for etcd [0d7ebfcf0f52] ...
	I0805 10:41:29.859462    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d7ebfcf0f52"
	I0805 10:41:29.873630    9068 logs.go:123] Gathering logs for kube-scheduler [ea53529dcbfd] ...
	I0805 10:41:29.873641    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea53529dcbfd"
	I0805 10:41:29.885621    9068 logs.go:123] Gathering logs for kube-scheduler [91f7a199884a] ...
	I0805 10:41:29.885631    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91f7a199884a"
	I0805 10:41:29.897621    9068 logs.go:123] Gathering logs for kube-proxy [a29b2ad2a3db] ...
	I0805 10:41:29.897631    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a29b2ad2a3db"
	I0805 10:41:29.909563    9068 logs.go:123] Gathering logs for dmesg ...
	I0805 10:41:29.909579    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:41:29.913536    9068 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:41:29.913541    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:41:29.950275    9068 logs.go:123] Gathering logs for kube-apiserver [3c41f12d029f] ...
	I0805 10:41:29.950285    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c41f12d029f"
	I0805 10:41:29.996432    9068 logs.go:123] Gathering logs for etcd [636206c34e2e] ...
	I0805 10:41:29.996445    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 636206c34e2e"
	I0805 10:41:30.011592    9068 logs.go:123] Gathering logs for Docker ...
	I0805 10:41:30.011608    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:41:29.778448    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:41:32.540854    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:41:34.780721    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:41:34.781284    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:41:34.820296    9085 logs.go:276] 2 containers: [d4f28dbbc4f1 13057f94c0f8]
	I0805 10:41:34.820439    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:41:34.842244    9085 logs.go:276] 2 containers: [275aaaabca50 d71cd5277bf8]
	I0805 10:41:34.842354    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:41:34.858219    9085 logs.go:276] 1 containers: [d211fad9684e]
	I0805 10:41:34.858300    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:41:34.870902    9085 logs.go:276] 2 containers: [ef0fa267cdc3 7c8977dcd66d]
	I0805 10:41:34.870977    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:41:34.885243    9085 logs.go:276] 1 containers: [9df378c7864e]
	I0805 10:41:34.885308    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:41:34.895891    9085 logs.go:276] 2 containers: [7b6baa84c14b 2c9aa7466dbd]
	I0805 10:41:34.895959    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:41:34.906290    9085 logs.go:276] 0 containers: []
	W0805 10:41:34.906303    9085 logs.go:278] No container was found matching "kindnet"
	I0805 10:41:34.906369    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:41:34.916614    9085 logs.go:276] 0 containers: []
	W0805 10:41:34.916626    9085 logs.go:278] No container was found matching "storage-provisioner"
	I0805 10:41:34.916634    9085 logs.go:123] Gathering logs for Docker ...
	I0805 10:41:34.916640    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:41:34.941068    9085 logs.go:123] Gathering logs for kube-apiserver [d4f28dbbc4f1] ...
	I0805 10:41:34.941081    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f28dbbc4f1"
	I0805 10:41:34.955213    9085 logs.go:123] Gathering logs for kube-apiserver [13057f94c0f8] ...
	I0805 10:41:34.955225    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13057f94c0f8"
	I0805 10:41:34.968446    9085 logs.go:123] Gathering logs for etcd [275aaaabca50] ...
	I0805 10:41:34.968457    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 275aaaabca50"
	I0805 10:41:34.982785    9085 logs.go:123] Gathering logs for etcd [d71cd5277bf8] ...
	I0805 10:41:34.982794    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71cd5277bf8"
	I0805 10:41:34.997677    9085 logs.go:123] Gathering logs for kube-scheduler [ef0fa267cdc3] ...
	I0805 10:41:34.997687    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef0fa267cdc3"
	I0805 10:41:35.011174    9085 logs.go:123] Gathering logs for dmesg ...
	I0805 10:41:35.011185    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:41:35.015662    9085 logs.go:123] Gathering logs for coredns [d211fad9684e] ...
	I0805 10:41:35.015669    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d211fad9684e"
	I0805 10:41:35.027197    9085 logs.go:123] Gathering logs for kube-scheduler [7c8977dcd66d] ...
	I0805 10:41:35.027208    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c8977dcd66d"
	I0805 10:41:35.038914    9085 logs.go:123] Gathering logs for kube-controller-manager [7b6baa84c14b] ...
	I0805 10:41:35.038925    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b6baa84c14b"
	I0805 10:41:35.055877    9085 logs.go:123] Gathering logs for container status ...
	I0805 10:41:35.055887    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:41:35.067275    9085 logs.go:123] Gathering logs for kubelet ...
	I0805 10:41:35.067285    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:41:35.108726    9085 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:41:35.108735    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:41:35.149605    9085 logs.go:123] Gathering logs for kube-proxy [9df378c7864e] ...
	I0805 10:41:35.149617    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9df378c7864e"
	I0805 10:41:35.162174    9085 logs.go:123] Gathering logs for kube-controller-manager [2c9aa7466dbd] ...
	I0805 10:41:35.162188    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c9aa7466dbd"
	I0805 10:41:37.543686    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:41:37.543898    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:41:37.563363    9068 logs.go:276] 2 containers: [399a26756750 3c41f12d029f]
	I0805 10:41:37.563455    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:41:37.580362    9068 logs.go:276] 2 containers: [0d7ebfcf0f52 636206c34e2e]
	I0805 10:41:37.580438    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:41:37.592548    9068 logs.go:276] 1 containers: [3fc287268b86]
	I0805 10:41:37.592615    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:41:37.603311    9068 logs.go:276] 2 containers: [ea53529dcbfd 91f7a199884a]
	I0805 10:41:37.603380    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:41:37.614022    9068 logs.go:276] 1 containers: [a29b2ad2a3db]
	I0805 10:41:37.614086    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:41:37.624717    9068 logs.go:276] 2 containers: [a2439666ad31 e1cc9e5e2f59]
	I0805 10:41:37.624788    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:41:37.635114    9068 logs.go:276] 0 containers: []
	W0805 10:41:37.635125    9068 logs.go:278] No container was found matching "kindnet"
	I0805 10:41:37.635177    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:41:37.645500    9068 logs.go:276] 2 containers: [e66ab34a3554 78e756f0378c]
	I0805 10:41:37.645517    9068 logs.go:123] Gathering logs for kube-apiserver [3c41f12d029f] ...
	I0805 10:41:37.645523    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c41f12d029f"
	I0805 10:41:37.684683    9068 logs.go:123] Gathering logs for etcd [0d7ebfcf0f52] ...
	I0805 10:41:37.684693    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d7ebfcf0f52"
	I0805 10:41:37.698277    9068 logs.go:123] Gathering logs for kube-controller-manager [a2439666ad31] ...
	I0805 10:41:37.698294    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2439666ad31"
	I0805 10:41:37.716071    9068 logs.go:123] Gathering logs for storage-provisioner [78e756f0378c] ...
	I0805 10:41:37.716084    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78e756f0378c"
	I0805 10:41:37.727146    9068 logs.go:123] Gathering logs for Docker ...
	I0805 10:41:37.727157    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:41:37.750685    9068 logs.go:123] Gathering logs for container status ...
	I0805 10:41:37.750695    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:41:37.762654    9068 logs.go:123] Gathering logs for kubelet ...
	I0805 10:41:37.762669    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:41:37.800030    9068 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:41:37.800037    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:41:37.835253    9068 logs.go:123] Gathering logs for etcd [636206c34e2e] ...
	I0805 10:41:37.835265    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 636206c34e2e"
	I0805 10:41:37.849979    9068 logs.go:123] Gathering logs for coredns [3fc287268b86] ...
	I0805 10:41:37.849990    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fc287268b86"
	I0805 10:41:37.861450    9068 logs.go:123] Gathering logs for kube-scheduler [ea53529dcbfd] ...
	I0805 10:41:37.861462    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea53529dcbfd"
	I0805 10:41:37.873609    9068 logs.go:123] Gathering logs for kube-scheduler [91f7a199884a] ...
	I0805 10:41:37.873624    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91f7a199884a"
	I0805 10:41:37.885831    9068 logs.go:123] Gathering logs for storage-provisioner [e66ab34a3554] ...
	I0805 10:41:37.885843    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e66ab34a3554"
	I0805 10:41:37.897786    9068 logs.go:123] Gathering logs for dmesg ...
	I0805 10:41:37.897803    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:41:37.901988    9068 logs.go:123] Gathering logs for kube-apiserver [399a26756750] ...
	I0805 10:41:37.901994    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 399a26756750"
	I0805 10:41:37.915987    9068 logs.go:123] Gathering logs for kube-proxy [a29b2ad2a3db] ...
	I0805 10:41:37.916002    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a29b2ad2a3db"
	I0805 10:41:37.928149    9068 logs.go:123] Gathering logs for kube-controller-manager [e1cc9e5e2f59] ...
	I0805 10:41:37.928160    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1cc9e5e2f59"
	I0805 10:41:40.444227    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:41:37.674644    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:41:45.446827    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:41:45.447131    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:41:45.474366    9068 logs.go:276] 2 containers: [399a26756750 3c41f12d029f]
	I0805 10:41:45.474489    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:41:45.494231    9068 logs.go:276] 2 containers: [0d7ebfcf0f52 636206c34e2e]
	I0805 10:41:45.494326    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:41:45.511464    9068 logs.go:276] 1 containers: [3fc287268b86]
	I0805 10:41:45.511539    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:41:45.522605    9068 logs.go:276] 2 containers: [ea53529dcbfd 91f7a199884a]
	I0805 10:41:45.522680    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:41:45.533123    9068 logs.go:276] 1 containers: [a29b2ad2a3db]
	I0805 10:41:45.533186    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:41:45.543454    9068 logs.go:276] 2 containers: [a2439666ad31 e1cc9e5e2f59]
	I0805 10:41:45.543516    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:41:42.676487    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:41:42.676761    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:41:42.700636    9085 logs.go:276] 2 containers: [d4f28dbbc4f1 13057f94c0f8]
	I0805 10:41:42.700757    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:41:42.716698    9085 logs.go:276] 2 containers: [275aaaabca50 d71cd5277bf8]
	I0805 10:41:42.716779    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:41:42.730014    9085 logs.go:276] 1 containers: [d211fad9684e]
	I0805 10:41:42.730087    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:41:42.741088    9085 logs.go:276] 2 containers: [ef0fa267cdc3 7c8977dcd66d]
	I0805 10:41:42.741166    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:41:42.751046    9085 logs.go:276] 1 containers: [9df378c7864e]
	I0805 10:41:42.751109    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:41:42.761798    9085 logs.go:276] 2 containers: [7b6baa84c14b 2c9aa7466dbd]
	I0805 10:41:42.761861    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:41:42.779122    9085 logs.go:276] 0 containers: []
	W0805 10:41:42.779133    9085 logs.go:278] No container was found matching "kindnet"
	I0805 10:41:42.779209    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:41:42.788975    9085 logs.go:276] 0 containers: []
	W0805 10:41:42.788986    9085 logs.go:278] No container was found matching "storage-provisioner"
	I0805 10:41:42.788994    9085 logs.go:123] Gathering logs for dmesg ...
	I0805 10:41:42.788999    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:41:42.793900    9085 logs.go:123] Gathering logs for kube-apiserver [d4f28dbbc4f1] ...
	I0805 10:41:42.793906    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f28dbbc4f1"
	I0805 10:41:42.808093    9085 logs.go:123] Gathering logs for kubelet ...
	I0805 10:41:42.808103    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:41:42.850142    9085 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:41:42.850150    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:41:42.884430    9085 logs.go:123] Gathering logs for etcd [d71cd5277bf8] ...
	I0805 10:41:42.884441    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71cd5277bf8"
	I0805 10:41:42.903208    9085 logs.go:123] Gathering logs for kube-controller-manager [2c9aa7466dbd] ...
	I0805 10:41:42.903218    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c9aa7466dbd"
	I0805 10:41:42.914477    9085 logs.go:123] Gathering logs for Docker ...
	I0805 10:41:42.914490    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:41:42.938689    9085 logs.go:123] Gathering logs for container status ...
	I0805 10:41:42.938697    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:41:42.950733    9085 logs.go:123] Gathering logs for kube-apiserver [13057f94c0f8] ...
	I0805 10:41:42.950744    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13057f94c0f8"
	I0805 10:41:42.963494    9085 logs.go:123] Gathering logs for coredns [d211fad9684e] ...
	I0805 10:41:42.963505    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d211fad9684e"
	I0805 10:41:42.974827    9085 logs.go:123] Gathering logs for kube-controller-manager [7b6baa84c14b] ...
	I0805 10:41:42.974837    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b6baa84c14b"
	I0805 10:41:42.992661    9085 logs.go:123] Gathering logs for kube-proxy [9df378c7864e] ...
	I0805 10:41:42.992674    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9df378c7864e"
	I0805 10:41:43.007012    9085 logs.go:123] Gathering logs for etcd [275aaaabca50] ...
	I0805 10:41:43.007025    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 275aaaabca50"
	I0805 10:41:43.020574    9085 logs.go:123] Gathering logs for kube-scheduler [ef0fa267cdc3] ...
	I0805 10:41:43.020585    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef0fa267cdc3"
	I0805 10:41:43.033482    9085 logs.go:123] Gathering logs for kube-scheduler [7c8977dcd66d] ...
	I0805 10:41:43.033493    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c8977dcd66d"
	I0805 10:41:45.546628    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:41:45.553904    9068 logs.go:276] 0 containers: []
	W0805 10:41:45.553915    9068 logs.go:278] No container was found matching "kindnet"
	I0805 10:41:45.553965    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:41:45.563986    9068 logs.go:276] 2 containers: [e66ab34a3554 78e756f0378c]
	I0805 10:41:45.564005    9068 logs.go:123] Gathering logs for kubelet ...
	I0805 10:41:45.564010    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:41:45.600725    9068 logs.go:123] Gathering logs for dmesg ...
	I0805 10:41:45.600735    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:41:45.604913    9068 logs.go:123] Gathering logs for kube-apiserver [3c41f12d029f] ...
	I0805 10:41:45.604920    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c41f12d029f"
	I0805 10:41:45.642113    9068 logs.go:123] Gathering logs for etcd [636206c34e2e] ...
	I0805 10:41:45.642123    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 636206c34e2e"
	I0805 10:41:45.656574    9068 logs.go:123] Gathering logs for kube-apiserver [399a26756750] ...
	I0805 10:41:45.656589    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 399a26756750"
	I0805 10:41:45.673263    9068 logs.go:123] Gathering logs for kube-scheduler [91f7a199884a] ...
	I0805 10:41:45.673275    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91f7a199884a"
	I0805 10:41:45.685278    9068 logs.go:123] Gathering logs for kube-proxy [a29b2ad2a3db] ...
	I0805 10:41:45.685292    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a29b2ad2a3db"
	I0805 10:41:45.697129    9068 logs.go:123] Gathering logs for storage-provisioner [78e756f0378c] ...
	I0805 10:41:45.697144    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78e756f0378c"
	I0805 10:41:45.708607    9068 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:41:45.708622    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:41:45.742673    9068 logs.go:123] Gathering logs for kube-controller-manager [a2439666ad31] ...
	I0805 10:41:45.742683    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2439666ad31"
	I0805 10:41:45.760176    9068 logs.go:123] Gathering logs for etcd [0d7ebfcf0f52] ...
	I0805 10:41:45.760190    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d7ebfcf0f52"
	I0805 10:41:45.773987    9068 logs.go:123] Gathering logs for coredns [3fc287268b86] ...
	I0805 10:41:45.773998    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fc287268b86"
	I0805 10:41:45.784934    9068 logs.go:123] Gathering logs for kube-scheduler [ea53529dcbfd] ...
	I0805 10:41:45.784947    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea53529dcbfd"
	I0805 10:41:45.799943    9068 logs.go:123] Gathering logs for kube-controller-manager [e1cc9e5e2f59] ...
	I0805 10:41:45.799959    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1cc9e5e2f59"
	I0805 10:41:45.813651    9068 logs.go:123] Gathering logs for storage-provisioner [e66ab34a3554] ...
	I0805 10:41:45.813662    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e66ab34a3554"
	I0805 10:41:45.825150    9068 logs.go:123] Gathering logs for Docker ...
	I0805 10:41:45.825161    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:41:45.848146    9068 logs.go:123] Gathering logs for container status ...
	I0805 10:41:45.848154    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:41:48.361479    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:41:50.548822    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:41:50.549238    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:41:50.586665    9085 logs.go:276] 2 containers: [d4f28dbbc4f1 13057f94c0f8]
	I0805 10:41:50.586808    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:41:50.607106    9085 logs.go:276] 2 containers: [275aaaabca50 d71cd5277bf8]
	I0805 10:41:50.607210    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:41:50.622256    9085 logs.go:276] 1 containers: [d211fad9684e]
	I0805 10:41:50.622332    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:41:50.637906    9085 logs.go:276] 2 containers: [ef0fa267cdc3 7c8977dcd66d]
	I0805 10:41:50.637982    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:41:50.652847    9085 logs.go:276] 1 containers: [9df378c7864e]
	I0805 10:41:50.652918    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:41:50.663847    9085 logs.go:276] 2 containers: [7b6baa84c14b 2c9aa7466dbd]
	I0805 10:41:50.663920    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:41:50.680010    9085 logs.go:276] 0 containers: []
	W0805 10:41:50.680023    9085 logs.go:278] No container was found matching "kindnet"
	I0805 10:41:50.680083    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:41:50.690856    9085 logs.go:276] 0 containers: []
	W0805 10:41:50.690870    9085 logs.go:278] No container was found matching "storage-provisioner"
	I0805 10:41:50.690878    9085 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:41:50.690884    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:41:50.724285    9085 logs.go:123] Gathering logs for kube-scheduler [7c8977dcd66d] ...
	I0805 10:41:50.724299    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c8977dcd66d"
	I0805 10:41:50.740777    9085 logs.go:123] Gathering logs for kube-proxy [9df378c7864e] ...
	I0805 10:41:50.740789    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9df378c7864e"
	I0805 10:41:50.752965    9085 logs.go:123] Gathering logs for Docker ...
	I0805 10:41:50.752978    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:41:50.776623    9085 logs.go:123] Gathering logs for etcd [275aaaabca50] ...
	I0805 10:41:50.776631    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 275aaaabca50"
	I0805 10:41:50.790787    9085 logs.go:123] Gathering logs for dmesg ...
	I0805 10:41:50.790803    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:41:50.795148    9085 logs.go:123] Gathering logs for kube-apiserver [d4f28dbbc4f1] ...
	I0805 10:41:50.795155    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f28dbbc4f1"
	I0805 10:41:50.809745    9085 logs.go:123] Gathering logs for etcd [d71cd5277bf8] ...
	I0805 10:41:50.809756    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71cd5277bf8"
	I0805 10:41:50.824378    9085 logs.go:123] Gathering logs for coredns [d211fad9684e] ...
	I0805 10:41:50.824388    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d211fad9684e"
	I0805 10:41:50.836986    9085 logs.go:123] Gathering logs for kube-controller-manager [7b6baa84c14b] ...
	I0805 10:41:50.836998    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b6baa84c14b"
	I0805 10:41:50.854677    9085 logs.go:123] Gathering logs for kube-controller-manager [2c9aa7466dbd] ...
	I0805 10:41:50.854687    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c9aa7466dbd"
	I0805 10:41:50.866286    9085 logs.go:123] Gathering logs for kubelet ...
	I0805 10:41:50.866298    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:41:50.909072    9085 logs.go:123] Gathering logs for kube-apiserver [13057f94c0f8] ...
	I0805 10:41:50.909083    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13057f94c0f8"
	I0805 10:41:50.920504    9085 logs.go:123] Gathering logs for kube-scheduler [ef0fa267cdc3] ...
	I0805 10:41:50.920515    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef0fa267cdc3"
	I0805 10:41:50.932298    9085 logs.go:123] Gathering logs for container status ...
	I0805 10:41:50.932311    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:41:53.363680    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:41:53.363771    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:41:53.374956    9068 logs.go:276] 2 containers: [399a26756750 3c41f12d029f]
	I0805 10:41:53.375030    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:41:53.385587    9068 logs.go:276] 2 containers: [0d7ebfcf0f52 636206c34e2e]
	I0805 10:41:53.385658    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:41:53.398052    9068 logs.go:276] 1 containers: [3fc287268b86]
	I0805 10:41:53.398122    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:41:53.408531    9068 logs.go:276] 2 containers: [ea53529dcbfd 91f7a199884a]
	I0805 10:41:53.408599    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:41:53.419492    9068 logs.go:276] 1 containers: [a29b2ad2a3db]
	I0805 10:41:53.419564    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:41:53.430317    9068 logs.go:276] 2 containers: [a2439666ad31 e1cc9e5e2f59]
	I0805 10:41:53.430384    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:41:53.440087    9068 logs.go:276] 0 containers: []
	W0805 10:41:53.440100    9068 logs.go:278] No container was found matching "kindnet"
	I0805 10:41:53.440163    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:41:53.450562    9068 logs.go:276] 2 containers: [e66ab34a3554 78e756f0378c]
	I0805 10:41:53.450578    9068 logs.go:123] Gathering logs for kube-controller-manager [e1cc9e5e2f59] ...
	I0805 10:41:53.450585    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1cc9e5e2f59"
	I0805 10:41:53.468003    9068 logs.go:123] Gathering logs for storage-provisioner [e66ab34a3554] ...
	I0805 10:41:53.468016    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e66ab34a3554"
	I0805 10:41:53.479598    9068 logs.go:123] Gathering logs for storage-provisioner [78e756f0378c] ...
	I0805 10:41:53.479609    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78e756f0378c"
	I0805 10:41:53.493710    9068 logs.go:123] Gathering logs for Docker ...
	I0805 10:41:53.493722    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:41:53.515417    9068 logs.go:123] Gathering logs for dmesg ...
	I0805 10:41:53.515424    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:41:53.519838    9068 logs.go:123] Gathering logs for etcd [636206c34e2e] ...
	I0805 10:41:53.519846    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 636206c34e2e"
	I0805 10:41:53.535778    9068 logs.go:123] Gathering logs for coredns [3fc287268b86] ...
	I0805 10:41:53.535789    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fc287268b86"
	I0805 10:41:53.547080    9068 logs.go:123] Gathering logs for kubelet ...
	I0805 10:41:53.547092    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:41:53.583631    9068 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:41:53.583642    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:41:53.619326    9068 logs.go:123] Gathering logs for kube-apiserver [3c41f12d029f] ...
	I0805 10:41:53.619342    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c41f12d029f"
	I0805 10:41:53.660987    9068 logs.go:123] Gathering logs for etcd [0d7ebfcf0f52] ...
	I0805 10:41:53.660999    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d7ebfcf0f52"
	I0805 10:41:53.675506    9068 logs.go:123] Gathering logs for kube-scheduler [ea53529dcbfd] ...
	I0805 10:41:53.675518    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea53529dcbfd"
	I0805 10:41:53.686995    9068 logs.go:123] Gathering logs for kube-controller-manager [a2439666ad31] ...
	I0805 10:41:53.687007    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2439666ad31"
	I0805 10:41:53.704717    9068 logs.go:123] Gathering logs for container status ...
	I0805 10:41:53.704731    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:41:53.721109    9068 logs.go:123] Gathering logs for kube-apiserver [399a26756750] ...
	I0805 10:41:53.721123    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 399a26756750"
	I0805 10:41:53.735045    9068 logs.go:123] Gathering logs for kube-scheduler [91f7a199884a] ...
	I0805 10:41:53.735059    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91f7a199884a"
	I0805 10:41:53.747160    9068 logs.go:123] Gathering logs for kube-proxy [a29b2ad2a3db] ...
	I0805 10:41:53.747172    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a29b2ad2a3db"
	I0805 10:41:53.447967    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:41:56.260024    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:41:58.450126    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
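
Each `Checking apiserver healthz` / `stopped: ... context deadline exceeded` pair above is one failed probe of the apiserver's /healthz endpoint. A hedged sketch of such a probe follows (the URL is from the log; the 5-second timeout and the skipped TLS verification are assumptions, since the apiserver serves a self-signed certificate):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		// Assumed timeout; on expiry Get returns the same
		// "Client.Timeout exceeded while awaiting headers" seen above.
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption: skip verification of the apiserver's self-signed cert.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://10.0.2.15:8443/healthz")
	if err != nil {
		fmt.Println("stopped:", err) // matches the repeated failures in the log
		return
	}
	defer resp.Body.Close()
	fmt.Println("healthz status:", resp.Status)
}
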
	I0805 10:41:58.450305    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:41:58.463438    9085 logs.go:276] 2 containers: [d4f28dbbc4f1 13057f94c0f8]
	I0805 10:41:58.463520    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:41:58.474811    9085 logs.go:276] 2 containers: [275aaaabca50 d71cd5277bf8]
	I0805 10:41:58.474877    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:41:58.485368    9085 logs.go:276] 1 containers: [d211fad9684e]
	I0805 10:41:58.485437    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:41:58.499430    9085 logs.go:276] 2 containers: [ef0fa267cdc3 7c8977dcd66d]
	I0805 10:41:58.499501    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:41:58.510279    9085 logs.go:276] 1 containers: [9df378c7864e]
	I0805 10:41:58.510351    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:41:58.521484    9085 logs.go:276] 2 containers: [7b6baa84c14b 2c9aa7466dbd]
	I0805 10:41:58.521553    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:41:58.532261    9085 logs.go:276] 0 containers: []
	W0805 10:41:58.532276    9085 logs.go:278] No container was found matching "kindnet"
	I0805 10:41:58.532332    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:41:58.542422    9085 logs.go:276] 0 containers: []
	W0805 10:41:58.542439    9085 logs.go:278] No container was found matching "storage-provisioner"
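
Before each gathering pass, the runs above enumerate containers per component with `docker ps -a --filter=name=k8s_<component> --format={{.ID}}`; zero matches produces the "No container was found" warnings. A small sketch of that discovery query (component names are taken from the log; the containerIDs helper is illustrative, not minikube's API):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs reproduces: docker ps -a --filter=name=k8s_<component> --format={{.ID}}
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		fmt.Printf("%d containers for %q: %v\n", len(ids), c, ids)
	}
}
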
	I0805 10:41:58.542446    9085 logs.go:123] Gathering logs for container status ...
	I0805 10:41:58.542452    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:41:58.553875    9085 logs.go:123] Gathering logs for kube-scheduler [7c8977dcd66d] ...
	I0805 10:41:58.553886    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c8977dcd66d"
	I0805 10:41:58.565448    9085 logs.go:123] Gathering logs for kube-proxy [9df378c7864e] ...
	I0805 10:41:58.565460    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9df378c7864e"
	I0805 10:41:58.577996    9085 logs.go:123] Gathering logs for kube-scheduler [ef0fa267cdc3] ...
	I0805 10:41:58.578012    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef0fa267cdc3"
	I0805 10:41:58.590006    9085 logs.go:123] Gathering logs for Docker ...
	I0805 10:41:58.590016    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:41:58.613435    9085 logs.go:123] Gathering logs for kubelet ...
	I0805 10:41:58.613443    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:41:58.654065    9085 logs.go:123] Gathering logs for kube-apiserver [d4f28dbbc4f1] ...
	I0805 10:41:58.654080    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f28dbbc4f1"
	I0805 10:41:58.673238    9085 logs.go:123] Gathering logs for kube-controller-manager [2c9aa7466dbd] ...
	I0805 10:41:58.673249    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c9aa7466dbd"
	I0805 10:41:58.685391    9085 logs.go:123] Gathering logs for dmesg ...
	I0805 10:41:58.685402    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:41:58.689845    9085 logs.go:123] Gathering logs for coredns [d211fad9684e] ...
	I0805 10:41:58.689853    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d211fad9684e"
	I0805 10:41:58.701551    9085 logs.go:123] Gathering logs for etcd [275aaaabca50] ...
	I0805 10:41:58.701563    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 275aaaabca50"
	I0805 10:41:58.715629    9085 logs.go:123] Gathering logs for etcd [d71cd5277bf8] ...
	I0805 10:41:58.715641    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71cd5277bf8"
	I0805 10:41:58.730904    9085 logs.go:123] Gathering logs for kube-controller-manager [7b6baa84c14b] ...
	I0805 10:41:58.730915    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b6baa84c14b"
	I0805 10:41:58.754792    9085 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:41:58.754801    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:41:58.788312    9085 logs.go:123] Gathering logs for kube-apiserver [13057f94c0f8] ...
	I0805 10:41:58.788329    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13057f94c0f8"
	I0805 10:42:01.302728    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:42:01.262366    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:42:01.262548    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:42:01.278005    9068 logs.go:276] 2 containers: [399a26756750 3c41f12d029f]
	I0805 10:42:01.278077    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:42:01.298003    9068 logs.go:276] 2 containers: [0d7ebfcf0f52 636206c34e2e]
	I0805 10:42:01.298076    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:42:01.314326    9068 logs.go:276] 1 containers: [3fc287268b86]
	I0805 10:42:01.314391    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:42:01.324954    9068 logs.go:276] 2 containers: [ea53529dcbfd 91f7a199884a]
	I0805 10:42:01.325027    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:42:01.335312    9068 logs.go:276] 1 containers: [a29b2ad2a3db]
	I0805 10:42:01.335376    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:42:01.350366    9068 logs.go:276] 2 containers: [a2439666ad31 e1cc9e5e2f59]
	I0805 10:42:01.350439    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:42:01.360957    9068 logs.go:276] 0 containers: []
	W0805 10:42:01.360970    9068 logs.go:278] No container was found matching "kindnet"
	I0805 10:42:01.361033    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:42:01.371440    9068 logs.go:276] 2 containers: [e66ab34a3554 78e756f0378c]
	I0805 10:42:01.371460    9068 logs.go:123] Gathering logs for kube-apiserver [399a26756750] ...
	I0805 10:42:01.371465    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 399a26756750"
	I0805 10:42:01.385308    9068 logs.go:123] Gathering logs for kube-controller-manager [a2439666ad31] ...
	I0805 10:42:01.385322    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2439666ad31"
	I0805 10:42:01.402452    9068 logs.go:123] Gathering logs for storage-provisioner [78e756f0378c] ...
	I0805 10:42:01.402464    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78e756f0378c"
	I0805 10:42:01.414446    9068 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:42:01.414459    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:42:01.450825    9068 logs.go:123] Gathering logs for coredns [3fc287268b86] ...
	I0805 10:42:01.450839    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fc287268b86"
	I0805 10:42:01.462465    9068 logs.go:123] Gathering logs for kube-proxy [a29b2ad2a3db] ...
	I0805 10:42:01.462475    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a29b2ad2a3db"
	I0805 10:42:01.474464    9068 logs.go:123] Gathering logs for Docker ...
	I0805 10:42:01.474476    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:42:01.496509    9068 logs.go:123] Gathering logs for container status ...
	I0805 10:42:01.496516    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:42:01.508176    9068 logs.go:123] Gathering logs for dmesg ...
	I0805 10:42:01.508187    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:42:01.512475    9068 logs.go:123] Gathering logs for kube-apiserver [3c41f12d029f] ...
	I0805 10:42:01.512484    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c41f12d029f"
	I0805 10:42:01.553396    9068 logs.go:123] Gathering logs for kube-scheduler [91f7a199884a] ...
	I0805 10:42:01.553410    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91f7a199884a"
	I0805 10:42:01.567009    9068 logs.go:123] Gathering logs for kubelet ...
	I0805 10:42:01.567020    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:42:01.607119    9068 logs.go:123] Gathering logs for etcd [0d7ebfcf0f52] ...
	I0805 10:42:01.607129    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d7ebfcf0f52"
	I0805 10:42:01.621588    9068 logs.go:123] Gathering logs for etcd [636206c34e2e] ...
	I0805 10:42:01.621598    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 636206c34e2e"
	I0805 10:42:01.635910    9068 logs.go:123] Gathering logs for kube-scheduler [ea53529dcbfd] ...
	I0805 10:42:01.635921    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea53529dcbfd"
	I0805 10:42:01.648145    9068 logs.go:123] Gathering logs for kube-controller-manager [e1cc9e5e2f59] ...
	I0805 10:42:01.648155    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1cc9e5e2f59"
	I0805 10:42:01.663393    9068 logs.go:123] Gathering logs for storage-provisioner [e66ab34a3554] ...
	I0805 10:42:01.663407    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e66ab34a3554"
	I0805 10:42:04.177423    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:42:06.304196    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:42:06.304757    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:42:06.337599    9085 logs.go:276] 2 containers: [d4f28dbbc4f1 13057f94c0f8]
	I0805 10:42:06.337750    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:42:06.363456    9085 logs.go:276] 2 containers: [275aaaabca50 d71cd5277bf8]
	I0805 10:42:06.363539    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:42:06.377315    9085 logs.go:276] 1 containers: [d211fad9684e]
	I0805 10:42:06.377391    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:42:06.394301    9085 logs.go:276] 2 containers: [ef0fa267cdc3 7c8977dcd66d]
	I0805 10:42:06.394374    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:42:06.411286    9085 logs.go:276] 1 containers: [9df378c7864e]
	I0805 10:42:06.411360    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:42:06.421766    9085 logs.go:276] 2 containers: [7b6baa84c14b 2c9aa7466dbd]
	I0805 10:42:06.421834    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:42:06.435216    9085 logs.go:276] 0 containers: []
	W0805 10:42:06.435227    9085 logs.go:278] No container was found matching "kindnet"
	I0805 10:42:06.435285    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:42:06.447657    9085 logs.go:276] 0 containers: []
	W0805 10:42:06.447669    9085 logs.go:278] No container was found matching "storage-provisioner"
	I0805 10:42:06.447676    9085 logs.go:123] Gathering logs for kubelet ...
	I0805 10:42:06.447747    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:42:06.490719    9085 logs.go:123] Gathering logs for dmesg ...
	I0805 10:42:06.490729    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:42:06.495083    9085 logs.go:123] Gathering logs for etcd [d71cd5277bf8] ...
	I0805 10:42:06.495092    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71cd5277bf8"
	I0805 10:42:06.509805    9085 logs.go:123] Gathering logs for container status ...
	I0805 10:42:06.509821    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:42:06.521763    9085 logs.go:123] Gathering logs for kube-apiserver [d4f28dbbc4f1] ...
	I0805 10:42:06.521775    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f28dbbc4f1"
	I0805 10:42:06.535923    9085 logs.go:123] Gathering logs for etcd [275aaaabca50] ...
	I0805 10:42:06.535938    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 275aaaabca50"
	I0805 10:42:06.549956    9085 logs.go:123] Gathering logs for kube-scheduler [ef0fa267cdc3] ...
	I0805 10:42:06.549965    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef0fa267cdc3"
	I0805 10:42:06.561592    9085 logs.go:123] Gathering logs for kube-proxy [9df378c7864e] ...
	I0805 10:42:06.561603    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9df378c7864e"
	I0805 10:42:06.573776    9085 logs.go:123] Gathering logs for kube-controller-manager [7b6baa84c14b] ...
	I0805 10:42:06.573787    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b6baa84c14b"
	I0805 10:42:06.593803    9085 logs.go:123] Gathering logs for Docker ...
	I0805 10:42:06.593816    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:42:06.618567    9085 logs.go:123] Gathering logs for kube-apiserver [13057f94c0f8] ...
	I0805 10:42:06.618575    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13057f94c0f8"
	I0805 10:42:06.633259    9085 logs.go:123] Gathering logs for kube-scheduler [7c8977dcd66d] ...
	I0805 10:42:06.633269    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c8977dcd66d"
	I0805 10:42:06.644038    9085 logs.go:123] Gathering logs for kube-controller-manager [2c9aa7466dbd] ...
	I0805 10:42:06.644050    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c9aa7466dbd"
	I0805 10:42:06.655605    9085 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:42:06.655615    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:42:06.690364    9085 logs.go:123] Gathering logs for coredns [d211fad9684e] ...
	I0805 10:42:06.690376    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d211fad9684e"
	I0805 10:42:09.179787    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:42:09.179968    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:42:09.201650    9068 logs.go:276] 2 containers: [399a26756750 3c41f12d029f]
	I0805 10:42:09.201732    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:42:09.214758    9068 logs.go:276] 2 containers: [0d7ebfcf0f52 636206c34e2e]
	I0805 10:42:09.214829    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:42:09.227231    9068 logs.go:276] 1 containers: [3fc287268b86]
	I0805 10:42:09.227296    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:42:09.238125    9068 logs.go:276] 2 containers: [ea53529dcbfd 91f7a199884a]
	I0805 10:42:09.238190    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:42:09.248795    9068 logs.go:276] 1 containers: [a29b2ad2a3db]
	I0805 10:42:09.248856    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:42:09.259897    9068 logs.go:276] 2 containers: [a2439666ad31 e1cc9e5e2f59]
	I0805 10:42:09.259967    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:42:09.271044    9068 logs.go:276] 0 containers: []
	W0805 10:42:09.271056    9068 logs.go:278] No container was found matching "kindnet"
	I0805 10:42:09.271108    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:42:09.281905    9068 logs.go:276] 2 containers: [e66ab34a3554 78e756f0378c]
	I0805 10:42:09.281928    9068 logs.go:123] Gathering logs for kube-controller-manager [a2439666ad31] ...
	I0805 10:42:09.281934    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2439666ad31"
	I0805 10:42:09.303172    9068 logs.go:123] Gathering logs for storage-provisioner [78e756f0378c] ...
	I0805 10:42:09.303183    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78e756f0378c"
	I0805 10:42:09.314408    9068 logs.go:123] Gathering logs for kubelet ...
	I0805 10:42:09.314423    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:42:09.352880    9068 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:42:09.352888    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:42:09.390133    9068 logs.go:123] Gathering logs for kube-apiserver [3c41f12d029f] ...
	I0805 10:42:09.390148    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c41f12d029f"
	I0805 10:42:09.428286    9068 logs.go:123] Gathering logs for etcd [0d7ebfcf0f52] ...
	I0805 10:42:09.428297    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d7ebfcf0f52"
	I0805 10:42:09.448775    9068 logs.go:123] Gathering logs for coredns [3fc287268b86] ...
	I0805 10:42:09.448790    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fc287268b86"
	I0805 10:42:09.460827    9068 logs.go:123] Gathering logs for kube-scheduler [ea53529dcbfd] ...
	I0805 10:42:09.460838    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea53529dcbfd"
	I0805 10:42:09.472767    9068 logs.go:123] Gathering logs for Docker ...
	I0805 10:42:09.472777    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:42:09.495851    9068 logs.go:123] Gathering logs for dmesg ...
	I0805 10:42:09.495860    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:42:09.500003    9068 logs.go:123] Gathering logs for kube-apiserver [399a26756750] ...
	I0805 10:42:09.500010    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 399a26756750"
	I0805 10:42:09.517044    9068 logs.go:123] Gathering logs for storage-provisioner [e66ab34a3554] ...
	I0805 10:42:09.517054    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e66ab34a3554"
	I0805 10:42:09.528358    9068 logs.go:123] Gathering logs for etcd [636206c34e2e] ...
	I0805 10:42:09.528371    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 636206c34e2e"
	I0805 10:42:09.543900    9068 logs.go:123] Gathering logs for kube-scheduler [91f7a199884a] ...
	I0805 10:42:09.543910    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91f7a199884a"
	I0805 10:42:09.556977    9068 logs.go:123] Gathering logs for kube-proxy [a29b2ad2a3db] ...
	I0805 10:42:09.556993    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a29b2ad2a3db"
	I0805 10:42:09.568509    9068 logs.go:123] Gathering logs for kube-controller-manager [e1cc9e5e2f59] ...
	I0805 10:42:09.568525    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1cc9e5e2f59"
	I0805 10:42:09.584145    9068 logs.go:123] Gathering logs for container status ...
	I0805 10:42:09.584156    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:42:09.202031    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:42:12.098283    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:42:14.204249    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:42:14.204682    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:42:14.242114    9085 logs.go:276] 2 containers: [d4f28dbbc4f1 13057f94c0f8]
	I0805 10:42:14.242266    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:42:14.262673    9085 logs.go:276] 2 containers: [275aaaabca50 d71cd5277bf8]
	I0805 10:42:14.262769    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:42:14.277114    9085 logs.go:276] 1 containers: [d211fad9684e]
	I0805 10:42:14.277215    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:42:14.289247    9085 logs.go:276] 2 containers: [ef0fa267cdc3 7c8977dcd66d]
	I0805 10:42:14.289315    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:42:14.300675    9085 logs.go:276] 1 containers: [9df378c7864e]
	I0805 10:42:14.300735    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:42:14.312755    9085 logs.go:276] 2 containers: [7b6baa84c14b 2c9aa7466dbd]
	I0805 10:42:14.312823    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:42:14.323690    9085 logs.go:276] 0 containers: []
	W0805 10:42:14.323702    9085 logs.go:278] No container was found matching "kindnet"
	I0805 10:42:14.323750    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:42:14.334330    9085 logs.go:276] 0 containers: []
	W0805 10:42:14.334342    9085 logs.go:278] No container was found matching "storage-provisioner"
	I0805 10:42:14.334349    9085 logs.go:123] Gathering logs for etcd [d71cd5277bf8] ...
	I0805 10:42:14.334356    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71cd5277bf8"
	I0805 10:42:14.349249    9085 logs.go:123] Gathering logs for Docker ...
	I0805 10:42:14.349261    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:42:14.372161    9085 logs.go:123] Gathering logs for container status ...
	I0805 10:42:14.372169    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:42:14.384866    9085 logs.go:123] Gathering logs for kube-apiserver [13057f94c0f8] ...
	I0805 10:42:14.384878    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13057f94c0f8"
	I0805 10:42:14.397281    9085 logs.go:123] Gathering logs for coredns [d211fad9684e] ...
	I0805 10:42:14.397292    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d211fad9684e"
	I0805 10:42:14.411741    9085 logs.go:123] Gathering logs for kube-scheduler [ef0fa267cdc3] ...
	I0805 10:42:14.411757    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef0fa267cdc3"
	I0805 10:42:14.423756    9085 logs.go:123] Gathering logs for kube-scheduler [7c8977dcd66d] ...
	I0805 10:42:14.423767    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c8977dcd66d"
	I0805 10:42:14.435560    9085 logs.go:123] Gathering logs for kube-proxy [9df378c7864e] ...
	I0805 10:42:14.435571    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9df378c7864e"
	I0805 10:42:14.447374    9085 logs.go:123] Gathering logs for dmesg ...
	I0805 10:42:14.447386    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:42:14.451837    9085 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:42:14.451844    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:42:14.485931    9085 logs.go:123] Gathering logs for kube-apiserver [d4f28dbbc4f1] ...
	I0805 10:42:14.485944    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f28dbbc4f1"
	I0805 10:42:14.504044    9085 logs.go:123] Gathering logs for kube-controller-manager [7b6baa84c14b] ...
	I0805 10:42:14.504056    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b6baa84c14b"
	I0805 10:42:14.526145    9085 logs.go:123] Gathering logs for kube-controller-manager [2c9aa7466dbd] ...
	I0805 10:42:14.526157    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c9aa7466dbd"
	I0805 10:42:14.537157    9085 logs.go:123] Gathering logs for kubelet ...
	I0805 10:42:14.537170    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:42:14.579565    9085 logs.go:123] Gathering logs for etcd [275aaaabca50] ...
	I0805 10:42:14.579577    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 275aaaabca50"
	I0805 10:42:17.094606    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:42:17.101073    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:42:17.101296    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:42:17.132219    9068 logs.go:276] 2 containers: [399a26756750 3c41f12d029f]
	I0805 10:42:17.132340    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:42:17.151936    9068 logs.go:276] 2 containers: [0d7ebfcf0f52 636206c34e2e]
	I0805 10:42:17.152020    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:42:17.170562    9068 logs.go:276] 1 containers: [3fc287268b86]
	I0805 10:42:17.170633    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:42:17.185915    9068 logs.go:276] 2 containers: [ea53529dcbfd 91f7a199884a]
	I0805 10:42:17.185975    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:42:17.195713    9068 logs.go:276] 1 containers: [a29b2ad2a3db]
	I0805 10:42:17.195777    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:42:17.205885    9068 logs.go:276] 2 containers: [a2439666ad31 e1cc9e5e2f59]
	I0805 10:42:17.205958    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:42:17.215753    9068 logs.go:276] 0 containers: []
	W0805 10:42:17.215764    9068 logs.go:278] No container was found matching "kindnet"
	I0805 10:42:17.215818    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:42:17.226633    9068 logs.go:276] 2 containers: [e66ab34a3554 78e756f0378c]
	I0805 10:42:17.226649    9068 logs.go:123] Gathering logs for etcd [636206c34e2e] ...
	I0805 10:42:17.226654    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 636206c34e2e"
	I0805 10:42:17.241383    9068 logs.go:123] Gathering logs for kube-controller-manager [a2439666ad31] ...
	I0805 10:42:17.241398    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2439666ad31"
	I0805 10:42:17.258508    9068 logs.go:123] Gathering logs for storage-provisioner [78e756f0378c] ...
	I0805 10:42:17.258517    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78e756f0378c"
	I0805 10:42:17.269571    9068 logs.go:123] Gathering logs for dmesg ...
	I0805 10:42:17.269587    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:42:17.273444    9068 logs.go:123] Gathering logs for kube-apiserver [3c41f12d029f] ...
	I0805 10:42:17.273451    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c41f12d029f"
	I0805 10:42:17.311008    9068 logs.go:123] Gathering logs for storage-provisioner [e66ab34a3554] ...
	I0805 10:42:17.311018    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e66ab34a3554"
	I0805 10:42:17.322007    9068 logs.go:123] Gathering logs for kubelet ...
	I0805 10:42:17.322021    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:42:17.360243    9068 logs.go:123] Gathering logs for kube-apiserver [399a26756750] ...
	I0805 10:42:17.360252    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 399a26756750"
	I0805 10:42:17.373466    9068 logs.go:123] Gathering logs for etcd [0d7ebfcf0f52] ...
	I0805 10:42:17.373476    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d7ebfcf0f52"
	I0805 10:42:17.387706    9068 logs.go:123] Gathering logs for kube-scheduler [ea53529dcbfd] ...
	I0805 10:42:17.387721    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea53529dcbfd"
	I0805 10:42:17.403478    9068 logs.go:123] Gathering logs for kube-scheduler [91f7a199884a] ...
	I0805 10:42:17.403492    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91f7a199884a"
	I0805 10:42:17.431676    9068 logs.go:123] Gathering logs for kube-controller-manager [e1cc9e5e2f59] ...
	I0805 10:42:17.431692    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1cc9e5e2f59"
	I0805 10:42:17.453527    9068 logs.go:123] Gathering logs for container status ...
	I0805 10:42:17.453544    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:42:17.466521    9068 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:42:17.466534    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:42:17.502355    9068 logs.go:123] Gathering logs for coredns [3fc287268b86] ...
	I0805 10:42:17.502370    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fc287268b86"
	I0805 10:42:17.513669    9068 logs.go:123] Gathering logs for kube-proxy [a29b2ad2a3db] ...
	I0805 10:42:17.513681    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a29b2ad2a3db"
	I0805 10:42:17.528158    9068 logs.go:123] Gathering logs for Docker ...
	I0805 10:42:17.528170    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
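
Alongside the container logs, every pass collects host-level logs with the three commands that recur above (journalctl for docker/cri-docker and kubelet, plus a filtered dmesg). A sketch that replays them verbatim; they need the same sudo access the test harness has, and `/bin/bash -c` matches the log's invocation:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Commands copied verbatim from the ssh_runner lines above.
	cmds := []string{
		`sudo journalctl -u docker -u cri-docker -n 400`,
		`sudo journalctl -u kubelet -n 400`,
		`sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`,
	}
	for _, c := range cmds {
		out, err := exec.Command("/bin/bash", "-c", c).CombinedOutput()
		if err != nil {
			fmt.Printf("%s failed: %v\n", c, err)
			continue
		}
		fmt.Printf("=== %s ===\n%s", c, out)
	}
}
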
	I0805 10:42:20.051796    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:42:22.097170    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:42:22.097452    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:42:22.125947    9085 logs.go:276] 2 containers: [d4f28dbbc4f1 13057f94c0f8]
	I0805 10:42:22.126066    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:42:22.143084    9085 logs.go:276] 2 containers: [275aaaabca50 d71cd5277bf8]
	I0805 10:42:22.143173    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:42:22.155898    9085 logs.go:276] 1 containers: [d211fad9684e]
	I0805 10:42:22.155977    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:42:22.167996    9085 logs.go:276] 2 containers: [ef0fa267cdc3 7c8977dcd66d]
	I0805 10:42:22.168065    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:42:22.178394    9085 logs.go:276] 1 containers: [9df378c7864e]
	I0805 10:42:22.178460    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:42:22.188881    9085 logs.go:276] 2 containers: [7b6baa84c14b 2c9aa7466dbd]
	I0805 10:42:22.188950    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:42:22.199345    9085 logs.go:276] 0 containers: []
	W0805 10:42:22.199358    9085 logs.go:278] No container was found matching "kindnet"
	I0805 10:42:22.199411    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:42:22.209005    9085 logs.go:276] 0 containers: []
	W0805 10:42:22.209016    9085 logs.go:278] No container was found matching "storage-provisioner"
	I0805 10:42:22.209022    9085 logs.go:123] Gathering logs for Docker ...
	I0805 10:42:22.209028    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:42:22.233623    9085 logs.go:123] Gathering logs for container status ...
	I0805 10:42:22.233631    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:42:22.245309    9085 logs.go:123] Gathering logs for kube-scheduler [7c8977dcd66d] ...
	I0805 10:42:22.245323    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c8977dcd66d"
	I0805 10:42:22.256947    9085 logs.go:123] Gathering logs for kube-apiserver [13057f94c0f8] ...
	I0805 10:42:22.256958    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13057f94c0f8"
	I0805 10:42:22.268605    9085 logs.go:123] Gathering logs for etcd [d71cd5277bf8] ...
	I0805 10:42:22.268617    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71cd5277bf8"
	I0805 10:42:22.287059    9085 logs.go:123] Gathering logs for kube-scheduler [ef0fa267cdc3] ...
	I0805 10:42:22.287068    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef0fa267cdc3"
	I0805 10:42:22.302881    9085 logs.go:123] Gathering logs for dmesg ...
	I0805 10:42:22.302895    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:42:22.307354    9085 logs.go:123] Gathering logs for etcd [275aaaabca50] ...
	I0805 10:42:22.307360    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 275aaaabca50"
	I0805 10:42:22.321437    9085 logs.go:123] Gathering logs for coredns [d211fad9684e] ...
	I0805 10:42:22.321451    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d211fad9684e"
	I0805 10:42:22.333056    9085 logs.go:123] Gathering logs for kube-proxy [9df378c7864e] ...
	I0805 10:42:22.333067    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9df378c7864e"
	I0805 10:42:25.054148    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:42:25.054276    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:42:25.069170    9068 logs.go:276] 2 containers: [399a26756750 3c41f12d029f]
	I0805 10:42:25.069256    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:42:25.081747    9068 logs.go:276] 2 containers: [0d7ebfcf0f52 636206c34e2e]
	I0805 10:42:25.081820    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:42:25.092182    9068 logs.go:276] 1 containers: [3fc287268b86]
	I0805 10:42:25.092247    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:42:25.102710    9068 logs.go:276] 2 containers: [ea53529dcbfd 91f7a199884a]
	I0805 10:42:25.102780    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:42:25.113347    9068 logs.go:276] 1 containers: [a29b2ad2a3db]
	I0805 10:42:25.113414    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:42:25.123509    9068 logs.go:276] 2 containers: [a2439666ad31 e1cc9e5e2f59]
	I0805 10:42:25.123579    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:42:25.133340    9068 logs.go:276] 0 containers: []
	W0805 10:42:25.133351    9068 logs.go:278] No container was found matching "kindnet"
	I0805 10:42:25.133400    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:42:25.148514    9068 logs.go:276] 2 containers: [e66ab34a3554 78e756f0378c]
	I0805 10:42:25.148532    9068 logs.go:123] Gathering logs for kube-controller-manager [a2439666ad31] ...
	I0805 10:42:25.148540    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2439666ad31"
	I0805 10:42:25.168474    9068 logs.go:123] Gathering logs for kube-controller-manager [e1cc9e5e2f59] ...
	I0805 10:42:25.168485    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1cc9e5e2f59"
	I0805 10:42:25.182467    9068 logs.go:123] Gathering logs for dmesg ...
	I0805 10:42:25.182478    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:42:25.187226    9068 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:42:25.187236    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:42:25.223348    9068 logs.go:123] Gathering logs for etcd [636206c34e2e] ...
	I0805 10:42:25.223361    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 636206c34e2e"
	I0805 10:42:25.238385    9068 logs.go:123] Gathering logs for kube-apiserver [399a26756750] ...
	I0805 10:42:25.238396    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 399a26756750"
	I0805 10:42:25.251677    9068 logs.go:123] Gathering logs for storage-provisioner [e66ab34a3554] ...
	I0805 10:42:25.251686    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e66ab34a3554"
	I0805 10:42:25.263633    9068 logs.go:123] Gathering logs for container status ...
	I0805 10:42:25.263646    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:42:25.276220    9068 logs.go:123] Gathering logs for etcd [0d7ebfcf0f52] ...
	I0805 10:42:25.276230    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d7ebfcf0f52"
	I0805 10:42:25.290204    9068 logs.go:123] Gathering logs for storage-provisioner [78e756f0378c] ...
	I0805 10:42:25.290214    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78e756f0378c"
	I0805 10:42:25.301990    9068 logs.go:123] Gathering logs for Docker ...
	I0805 10:42:25.302005    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:42:25.323299    9068 logs.go:123] Gathering logs for kube-scheduler [ea53529dcbfd] ...
	I0805 10:42:25.323307    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea53529dcbfd"
	I0805 10:42:25.335352    9068 logs.go:123] Gathering logs for kube-scheduler [91f7a199884a] ...
	I0805 10:42:25.335362    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91f7a199884a"
	I0805 10:42:25.347290    9068 logs.go:123] Gathering logs for kube-proxy [a29b2ad2a3db] ...
	I0805 10:42:25.347300    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a29b2ad2a3db"
	I0805 10:42:25.359501    9068 logs.go:123] Gathering logs for kubelet ...
	I0805 10:42:25.359512    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:42:25.396202    9068 logs.go:123] Gathering logs for kube-apiserver [3c41f12d029f] ...
	I0805 10:42:25.396210    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c41f12d029f"
	I0805 10:42:25.434367    9068 logs.go:123] Gathering logs for coredns [3fc287268b86] ...
	I0805 10:42:25.434378    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fc287268b86"
	I0805 10:42:22.345037    9085 logs.go:123] Gathering logs for kube-controller-manager [7b6baa84c14b] ...
	I0805 10:42:22.345047    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b6baa84c14b"
	I0805 10:42:22.363704    9085 logs.go:123] Gathering logs for kube-controller-manager [2c9aa7466dbd] ...
	I0805 10:42:22.363718    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c9aa7466dbd"
	I0805 10:42:22.377912    9085 logs.go:123] Gathering logs for kubelet ...
	I0805 10:42:22.377922    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:42:22.420613    9085 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:42:22.420628    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:42:22.459913    9085 logs.go:123] Gathering logs for kube-apiserver [d4f28dbbc4f1] ...
	I0805 10:42:22.459924    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f28dbbc4f1"
	I0805 10:42:24.976538    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:42:27.947796    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:42:29.976937    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:42:29.977110    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:42:29.993807    9085 logs.go:276] 2 containers: [d4f28dbbc4f1 13057f94c0f8]
	I0805 10:42:29.993898    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:42:30.005794    9085 logs.go:276] 2 containers: [275aaaabca50 d71cd5277bf8]
	I0805 10:42:30.005857    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:42:30.021189    9085 logs.go:276] 1 containers: [d211fad9684e]
	I0805 10:42:30.021258    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:42:30.031796    9085 logs.go:276] 2 containers: [ef0fa267cdc3 7c8977dcd66d]
	I0805 10:42:30.031875    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:42:30.047559    9085 logs.go:276] 1 containers: [9df378c7864e]
	I0805 10:42:30.047632    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:42:30.057959    9085 logs.go:276] 2 containers: [7b6baa84c14b 2c9aa7466dbd]
	I0805 10:42:30.058030    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:42:30.067754    9085 logs.go:276] 0 containers: []
	W0805 10:42:30.067765    9085 logs.go:278] No container was found matching "kindnet"
	I0805 10:42:30.067824    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:42:30.077617    9085 logs.go:276] 0 containers: []
	W0805 10:42:30.077630    9085 logs.go:278] No container was found matching "storage-provisioner"
	I0805 10:42:30.077638    9085 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:42:30.077644    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:42:30.111710    9085 logs.go:123] Gathering logs for etcd [d71cd5277bf8] ...
	I0805 10:42:30.111721    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71cd5277bf8"
	I0805 10:42:30.126150    9085 logs.go:123] Gathering logs for kube-scheduler [ef0fa267cdc3] ...
	I0805 10:42:30.126161    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef0fa267cdc3"
	I0805 10:42:30.138007    9085 logs.go:123] Gathering logs for kube-controller-manager [7b6baa84c14b] ...
	I0805 10:42:30.138017    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b6baa84c14b"
	I0805 10:42:30.156162    9085 logs.go:123] Gathering logs for Docker ...
	I0805 10:42:30.156173    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:42:30.179298    9085 logs.go:123] Gathering logs for kube-apiserver [d4f28dbbc4f1] ...
	I0805 10:42:30.179306    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f28dbbc4f1"
	I0805 10:42:30.193329    9085 logs.go:123] Gathering logs for kube-apiserver [13057f94c0f8] ...
	I0805 10:42:30.193339    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13057f94c0f8"
	I0805 10:42:30.205924    9085 logs.go:123] Gathering logs for etcd [275aaaabca50] ...
	I0805 10:42:30.205936    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 275aaaabca50"
	I0805 10:42:30.220204    9085 logs.go:123] Gathering logs for coredns [d211fad9684e] ...
	I0805 10:42:30.220214    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d211fad9684e"
	I0805 10:42:30.231932    9085 logs.go:123] Gathering logs for kube-proxy [9df378c7864e] ...
	I0805 10:42:30.231941    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9df378c7864e"
	I0805 10:42:30.243323    9085 logs.go:123] Gathering logs for container status ...
	I0805 10:42:30.243334    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:42:30.255690    9085 logs.go:123] Gathering logs for dmesg ...
	I0805 10:42:30.255705    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:42:30.260134    9085 logs.go:123] Gathering logs for kube-controller-manager [2c9aa7466dbd] ...
	I0805 10:42:30.260141    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c9aa7466dbd"
	I0805 10:42:30.271497    9085 logs.go:123] Gathering logs for kubelet ...
	I0805 10:42:30.271508    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:42:30.312798    9085 logs.go:123] Gathering logs for kube-scheduler [7c8977dcd66d] ...
	I0805 10:42:30.312807    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c8977dcd66d"
	I0805 10:42:32.950264    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:42:32.950396    9068 kubeadm.go:597] duration metric: took 4m4.011681625s to restartPrimaryControlPlane
	W0805 10:42:32.950538    9068 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0805 10:42:32.950600    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0805 10:42:34.019848    9068 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.069248334s)
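
The timeline above shows the fallback: after roughly four minutes of failed healthz probes, restartPrimaryControlPlane gives up and the cluster is reset with `kubeadm reset`. A sketch of that poll-then-reset control flow (the four-minute budget is inferred from the `took 4m4.011681625s` duration metric; healthy is a hypothetical stand-in for the probe sketched earlier):

package main

import (
	"fmt"
	"time"
)

// healthy is a stand-in for the /healthz probe; in this run it never succeeds.
func healthy() bool { return false }

func main() {
	// Inferred budget; shorten it to try the sketch without waiting 4 minutes.
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		if healthy() {
			fmt.Println("control plane restarted")
			return
		}
		time.Sleep(5 * time.Second) // retry interval is an assumption
	}
	fmt.Println("! Unable to restart control-plane node(s), will reset cluster")
}
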
	I0805 10:42:34.019925    9068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 10:42:34.024918    9068 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 10:42:34.027655    9068 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 10:42:34.030554    9068 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 10:42:34.030559    9068 kubeadm.go:157] found existing configuration files:
	
	I0805 10:42:34.030578    9068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51187 /etc/kubernetes/admin.conf
	I0805 10:42:34.033339    9068 kubeadm.go:163] "https://control-plane.minikube.internal:51187" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51187 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 10:42:34.033359    9068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 10:42:34.036034    9068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51187 /etc/kubernetes/kubelet.conf
	I0805 10:42:34.038691    9068 kubeadm.go:163] "https://control-plane.minikube.internal:51187" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51187 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 10:42:34.038715    9068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 10:42:34.042094    9068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51187 /etc/kubernetes/controller-manager.conf
	I0805 10:42:34.044763    9068 kubeadm.go:163] "https://control-plane.minikube.internal:51187" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51187 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 10:42:34.044787    9068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 10:42:34.047199    9068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51187 /etc/kubernetes/scheduler.conf
	I0805 10:42:34.050277    9068 kubeadm.go:163] "https://control-plane.minikube.internal:51187" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51187 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 10:42:34.050300    9068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
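
The sequence above greps each kubeconfig for the expected control-plane endpoint and removes the file when the check fails; in this run every grep exits with status 2 because `kubeadm reset` already deleted the files, so each `rm -f` is effectively a no-op. A sketch of that cleanup loop (paths and endpoint copied from the log; the log runs these via sudo over SSH, which this local sketch omits):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:51187"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// Mirrors: sudo grep <endpoint> <file>. grep exits 1 on no match
		// and 2 when the file is missing; either way the config is stale.
		if err := exec.Command("grep", endpoint, f).Run(); err != nil {
			fmt.Printf("%q may not be in %s - will remove: %v\n", endpoint, f, err)
			os.Remove(f) // mirrors: sudo rm -f <file> (sudo omitted here)
		}
	}
}
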
	I0805 10:42:34.053059    9068 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0805 10:42:34.070160    9068 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0805 10:42:34.070188    9068 kubeadm.go:310] [preflight] Running pre-flight checks
	I0805 10:42:34.125465    9068 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0805 10:42:34.125546    9068 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0805 10:42:34.125597    9068 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0805 10:42:34.174357    9068 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0805 10:42:34.177622    9068 out.go:204]   - Generating certificates and keys ...
	I0805 10:42:34.177654    9068 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0805 10:42:34.177685    9068 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0805 10:42:34.177794    9068 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0805 10:42:34.177860    9068 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0805 10:42:34.178023    9068 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0805 10:42:34.178105    9068 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0805 10:42:34.178153    9068 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0805 10:42:34.178230    9068 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0805 10:42:34.178308    9068 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0805 10:42:34.178350    9068 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0805 10:42:34.178440    9068 kubeadm.go:310] [certs] Using the existing "sa" key
	I0805 10:42:34.178513    9068 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0805 10:42:34.275839    9068 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0805 10:42:34.388908    9068 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0805 10:42:34.603280    9068 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0805 10:42:34.650264    9068 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0805 10:42:34.683183    9068 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0805 10:42:34.684663    9068 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0805 10:42:34.684692    9068 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0805 10:42:34.767116    9068 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0805 10:42:34.773292    9068 out.go:204]   - Booting up control plane ...
	I0805 10:42:34.773345    9068 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0805 10:42:34.773391    9068 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0805 10:42:34.773429    9068 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0805 10:42:34.773474    9068 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0805 10:42:34.773568    9068 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0805 10:42:32.825916    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:42:39.771440    9068 kubeadm.go:310] [apiclient] All control plane components are healthy after 5.001646 seconds
	I0805 10:42:39.771553    9068 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0805 10:42:39.777699    9068 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0805 10:42:40.296745    9068 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0805 10:42:40.296998    9068 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-363000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0805 10:42:40.803378    9068 kubeadm.go:310] [bootstrap-token] Using token: y4030r.pqrajb9g358l1ucz
	I0805 10:42:40.806226    9068 out.go:204]   - Configuring RBAC rules ...
	I0805 10:42:40.806319    9068 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0805 10:42:40.808688    9068 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0805 10:42:40.811693    9068 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0805 10:42:40.812840    9068 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0805 10:42:40.813930    9068 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0805 10:42:40.815147    9068 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0805 10:42:40.819339    9068 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0805 10:42:41.007932    9068 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0805 10:42:41.210768    9068 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0805 10:42:41.211286    9068 kubeadm.go:310] 
	I0805 10:42:41.211320    9068 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0805 10:42:41.211324    9068 kubeadm.go:310] 
	I0805 10:42:41.211376    9068 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0805 10:42:41.211382    9068 kubeadm.go:310] 
	I0805 10:42:41.211413    9068 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0805 10:42:41.211447    9068 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0805 10:42:41.211472    9068 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0805 10:42:41.211474    9068 kubeadm.go:310] 
	I0805 10:42:41.211512    9068 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0805 10:42:41.211516    9068 kubeadm.go:310] 
	I0805 10:42:41.211545    9068 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0805 10:42:41.211548    9068 kubeadm.go:310] 
	I0805 10:42:41.211580    9068 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0805 10:42:41.211678    9068 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0805 10:42:41.211740    9068 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0805 10:42:41.211747    9068 kubeadm.go:310] 
	I0805 10:42:41.211809    9068 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0805 10:42:41.211860    9068 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0805 10:42:41.211863    9068 kubeadm.go:310] 
	I0805 10:42:41.211927    9068 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token y4030r.pqrajb9g358l1ucz \
	I0805 10:42:41.212018    9068 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:11215ef01abcfb912d109f6d89af227ccae4ec1efb0dbe7ad4cd9a56e17c4c25 \
	I0805 10:42:41.212034    9068 kubeadm.go:310] 	--control-plane 
	I0805 10:42:41.212039    9068 kubeadm.go:310] 
	I0805 10:42:41.212080    9068 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0805 10:42:41.212083    9068 kubeadm.go:310] 
	I0805 10:42:41.212129    9068 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token y4030r.pqrajb9g358l1ucz \
	I0805 10:42:41.212205    9068 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:11215ef01abcfb912d109f6d89af227ccae4ec1efb0dbe7ad4cd9a56e17c4c25 
	I0805 10:42:41.212258    9068 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
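The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA's public key. If the printed command is lost, the hash can be recomputed on the control-plane node with the standard kubeadm recipe (the pki path is kubeadm's default and is assumed here):

    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
    # prefix the output with "sha256:" when passing it to kubeadm join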
	I0805 10:42:41.212263    9068 cni.go:84] Creating CNI manager for ""
	I0805 10:42:41.212272    9068 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 10:42:41.216445    9068 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0805 10:42:41.221444    9068 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0805 10:42:41.224474    9068 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
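The 496-byte conflist copied above is minikube's bridge CNI configuration; the log does not reproduce its payload. A representative bridge plus host-local IPAM conflist, written the same way by hand, would look roughly like this (subnet, bridge name, and file content are illustrative, not the actual bytes shipped):

    sudo tee /etc/cni/net.d/1-k8s.conflist <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF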
	I0805 10:42:41.231836    9068 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0805 10:42:41.231902    9068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 10:42:41.231941    9068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-363000 minikube.k8s.io/updated_at=2024_08_05T10_42_41_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=7ab1b4d76a5d87b75cd4b70be3ee81f93304b0ab minikube.k8s.io/name=stopped-upgrade-363000 minikube.k8s.io/primary=true
	I0805 10:42:41.276995    9068 kubeadm.go:1113] duration metric: took 45.154667ms to wait for elevateKubeSystemPrivileges
	I0805 10:42:41.277009    9068 ops.go:34] apiserver oom_adj: -16
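An oom_adj of -16, read from /proc/<pid>/oom_adj above, sits one step above the minimum of the legacy -17..15 scale, so the kernel's OOM killer will pick almost any other process before the apiserver. The same check by hand:

    # legacy OOM interface: -17 disables OOM kill, 15 is killed first
    cat /proc/$(pgrep kube-apiserver)/oom_adj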
	I0805 10:42:41.277144    9068 kubeadm.go:394] duration metric: took 4m12.352007166s to StartCluster
	I0805 10:42:41.277156    9068 settings.go:142] acquiring lock: {Name:mk1ff1cf525c2989e8f58a78ff9196d0a088a47b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 10:42:41.277318    9068 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19374-6507/kubeconfig
	I0805 10:42:41.277712    9068 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19374-6507/kubeconfig: {Name:mkf52f0a49b2ae63f3d2905c5633513b3086a0af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 10:42:41.277898    9068 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 10:42:41.277941    9068 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0805 10:42:41.277981    9068 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-363000"
	I0805 10:42:41.277993    9068 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-363000"
	I0805 10:42:41.278003    9068 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-363000"
	I0805 10:42:41.277993    9068 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-363000"
	W0805 10:42:41.278035    9068 addons.go:243] addon storage-provisioner should already be in state true
	I0805 10:42:41.278048    9068 host.go:66] Checking if "stopped-upgrade-363000" exists ...
	I0805 10:42:41.278066    9068 config.go:182] Loaded profile config "stopped-upgrade-363000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0805 10:42:41.282216    9068 out.go:177] * Verifying Kubernetes components...
	I0805 10:42:41.282850    9068 kapi.go:59] client config for stopped-upgrade-363000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/stopped-upgrade-363000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/stopped-upgrade-363000/client.key", CAFile:"/Users/jenkins/minikube-integration/19374-6507/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1019d02e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0805 10:42:41.286694    9068 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-363000"
	W0805 10:42:41.286700    9068 addons.go:243] addon default-storageclass should already be in state true
	I0805 10:42:41.286708    9068 host.go:66] Checking if "stopped-upgrade-363000" exists ...
	I0805 10:42:41.287226    9068 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0805 10:42:41.287231    9068 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0805 10:42:41.287236    9068 sshutil.go:53] new ssh client: &{IP:localhost Port:51155 SSHKeyPath:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/stopped-upgrade-363000/id_rsa Username:docker}
	I0805 10:42:41.292411    9068 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 10:42:37.828526    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:42:37.828673    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:42:37.845304    9085 logs.go:276] 2 containers: [d4f28dbbc4f1 13057f94c0f8]
	I0805 10:42:37.845377    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:42:37.856686    9085 logs.go:276] 2 containers: [275aaaabca50 d71cd5277bf8]
	I0805 10:42:37.856762    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:42:37.867181    9085 logs.go:276] 1 containers: [d211fad9684e]
	I0805 10:42:37.867250    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:42:37.878098    9085 logs.go:276] 2 containers: [ef0fa267cdc3 7c8977dcd66d]
	I0805 10:42:37.878173    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:42:37.889271    9085 logs.go:276] 1 containers: [9df378c7864e]
	I0805 10:42:37.889339    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:42:37.900099    9085 logs.go:276] 2 containers: [7b6baa84c14b 2c9aa7466dbd]
	I0805 10:42:37.900171    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:42:37.912440    9085 logs.go:276] 0 containers: []
	W0805 10:42:37.912451    9085 logs.go:278] No container was found matching "kindnet"
	I0805 10:42:37.912502    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:42:37.922990    9085 logs.go:276] 0 containers: []
	W0805 10:42:37.923001    9085 logs.go:278] No container was found matching "storage-provisioner"
	I0805 10:42:37.923011    9085 logs.go:123] Gathering logs for coredns [d211fad9684e] ...
	I0805 10:42:37.923016    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d211fad9684e"
	I0805 10:42:37.934615    9085 logs.go:123] Gathering logs for kube-scheduler [ef0fa267cdc3] ...
	I0805 10:42:37.934627    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef0fa267cdc3"
	I0805 10:42:37.946785    9085 logs.go:123] Gathering logs for kube-proxy [9df378c7864e] ...
	I0805 10:42:37.946799    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9df378c7864e"
	I0805 10:42:37.958879    9085 logs.go:123] Gathering logs for kube-controller-manager [2c9aa7466dbd] ...
	I0805 10:42:37.958891    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c9aa7466dbd"
	I0805 10:42:37.971411    9085 logs.go:123] Gathering logs for container status ...
	I0805 10:42:37.971423    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:42:37.983563    9085 logs.go:123] Gathering logs for kube-apiserver [d4f28dbbc4f1] ...
	I0805 10:42:37.983580    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4f28dbbc4f1"
	I0805 10:42:37.999826    9085 logs.go:123] Gathering logs for kube-apiserver [13057f94c0f8] ...
	I0805 10:42:37.999838    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 13057f94c0f8"
	I0805 10:42:38.011798    9085 logs.go:123] Gathering logs for kube-controller-manager [7b6baa84c14b] ...
	I0805 10:42:38.011814    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b6baa84c14b"
	I0805 10:42:38.029395    9085 logs.go:123] Gathering logs for Docker ...
	I0805 10:42:38.029412    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:42:38.053617    9085 logs.go:123] Gathering logs for etcd [275aaaabca50] ...
	I0805 10:42:38.053634    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 275aaaabca50"
	I0805 10:42:38.067687    9085 logs.go:123] Gathering logs for etcd [d71cd5277bf8] ...
	I0805 10:42:38.067698    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d71cd5277bf8"
	I0805 10:42:38.086313    9085 logs.go:123] Gathering logs for kube-scheduler [7c8977dcd66d] ...
	I0805 10:42:38.086324    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c8977dcd66d"
	I0805 10:42:38.099335    9085 logs.go:123] Gathering logs for kubelet ...
	I0805 10:42:38.099347    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:42:38.143935    9085 logs.go:123] Gathering logs for dmesg ...
	I0805 10:42:38.143952    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:42:38.148601    9085 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:42:38.148610    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
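Each failed healthz probe triggers the same diagnostics sweep seen above: enumerate the containers the kubelet named with the k8s_<component> prefix, then tail the last 400 lines of each, plus journalctl for the kubelet and Docker units. Reduced to its two building blocks (the container ID is a hypothetical placeholder):

    docker ps -a --filter=name=k8s_kube-apiserver --format '{{.ID}}'
    docker logs --tail 400 <container-id>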
	I0805 10:42:40.688887    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:42:41.298462    9068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 10:42:41.304387    9068 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 10:42:41.304394    9068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0805 10:42:41.304401    9068 sshutil.go:53] new ssh client: &{IP:localhost Port:51155 SSHKeyPath:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/stopped-upgrade-363000/id_rsa Username:docker}
	I0805 10:42:41.372664    9068 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 10:42:41.377345    9068 api_server.go:52] waiting for apiserver process to appear ...
	I0805 10:42:41.377380    9068 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 10:42:41.381138    9068 api_server.go:72] duration metric: took 103.231458ms to wait for apiserver process to appear ...
	I0805 10:42:41.381147    9068 api_server.go:88] waiting for apiserver healthz status ...
	I0805 10:42:41.381154    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:42:41.388452    9068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0805 10:42:41.419441    9068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 10:42:45.691222    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:42:45.691301    9085 kubeadm.go:597] duration metric: took 4m3.865442417s to restartPrimaryControlPlane
	W0805 10:42:45.691378    9085 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0805 10:42:45.691411    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0805 10:42:46.627737    9085 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 10:42:46.632759    9085 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 10:42:46.635381    9085 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 10:42:46.638044    9085 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 10:42:46.638049    9085 kubeadm.go:157] found existing configuration files:
	
	I0805 10:42:46.638069    9085 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51256 /etc/kubernetes/admin.conf
	I0805 10:42:46.640638    9085 kubeadm.go:163] "https://control-plane.minikube.internal:51256" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51256 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 10:42:46.640660    9085 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 10:42:46.643101    9085 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51256 /etc/kubernetes/kubelet.conf
	I0805 10:42:46.645594    9085 kubeadm.go:163] "https://control-plane.minikube.internal:51256" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51256 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 10:42:46.645616    9085 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 10:42:46.648696    9085 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51256 /etc/kubernetes/controller-manager.conf
	I0805 10:42:46.651391    9085 kubeadm.go:163] "https://control-plane.minikube.internal:51256" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51256 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 10:42:46.651415    9085 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 10:42:46.653879    9085 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51256 /etc/kubernetes/scheduler.conf
	I0805 10:42:46.656886    9085 kubeadm.go:163] "https://control-plane.minikube.internal:51256" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51256 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 10:42:46.656915    9085 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
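The status-2 exits above are expected rather than anomalous: the kubeadm reset at 10:42:45 removed the /etc/kubernetes/*.conf kubeconfig files, and both ls and GNU grep exit with 2 when the target file cannot be read at all (grep reserves 1 for a readable file with no match). The distinction in two lines:

    grep -q 'https://control-plane.minikube.internal:51256' /etc/kubernetes/admin.conf
    echo $?   # 2 = file missing/unreadable, 1 = file present but no match, 0 = match found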
	I0805 10:42:46.659521    9085 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0805 10:42:46.675178    9085 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0805 10:42:46.675207    9085 kubeadm.go:310] [preflight] Running pre-flight checks
	I0805 10:42:46.733508    9085 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0805 10:42:46.733573    9085 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0805 10:42:46.733621    9085 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0805 10:42:46.788148    9085 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0805 10:42:46.791287    9085 out.go:204]   - Generating certificates and keys ...
	I0805 10:42:46.791318    9085 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0805 10:42:46.791355    9085 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0805 10:42:46.791395    9085 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0805 10:42:46.791427    9085 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0805 10:42:46.791462    9085 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0805 10:42:46.791493    9085 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0805 10:42:46.791538    9085 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0805 10:42:46.791584    9085 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0805 10:42:46.791658    9085 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0805 10:42:46.791711    9085 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0805 10:42:46.791735    9085 kubeadm.go:310] [certs] Using the existing "sa" key
	I0805 10:42:46.791765    9085 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0805 10:42:46.950090    9085 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0805 10:42:47.040298    9085 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0805 10:42:47.177214    9085 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0805 10:42:47.399963    9085 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0805 10:42:47.431002    9085 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0805 10:42:47.431380    9085 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0805 10:42:47.431403    9085 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0805 10:42:47.503719    9085 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0805 10:42:46.383179    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:42:46.383202    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:42:47.507875    9085 out.go:204]   - Booting up control plane ...
	I0805 10:42:47.507917    9085 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0805 10:42:47.507962    9085 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0805 10:42:47.507991    9085 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0805 10:42:47.508033    9085 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0805 10:42:47.508127    9085 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0805 10:42:52.012736    9085 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.505868 seconds
	I0805 10:42:52.012804    9085 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0805 10:42:52.016469    9085 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0805 10:42:52.527819    9085 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0805 10:42:52.527974    9085 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-952000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0805 10:42:53.031919    9085 kubeadm.go:310] [bootstrap-token] Using token: qtm75q.q9mybrkyko74z444
	I0805 10:42:53.033858    9085 out.go:204]   - Configuring RBAC rules ...
	I0805 10:42:53.033917    9085 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0805 10:42:53.034046    9085 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0805 10:42:53.040674    9085 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0805 10:42:53.041659    9085 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0805 10:42:53.042633    9085 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0805 10:42:53.043736    9085 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0805 10:42:53.046840    9085 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0805 10:42:53.229993    9085 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0805 10:42:53.435877    9085 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0805 10:42:53.436369    9085 kubeadm.go:310] 
	I0805 10:42:53.436402    9085 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0805 10:42:53.436406    9085 kubeadm.go:310] 
	I0805 10:42:53.436448    9085 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0805 10:42:53.436454    9085 kubeadm.go:310] 
	I0805 10:42:53.436483    9085 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0805 10:42:53.436528    9085 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0805 10:42:53.436559    9085 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0805 10:42:53.436562    9085 kubeadm.go:310] 
	I0805 10:42:53.436593    9085 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0805 10:42:53.436596    9085 kubeadm.go:310] 
	I0805 10:42:53.436623    9085 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0805 10:42:53.436629    9085 kubeadm.go:310] 
	I0805 10:42:53.436675    9085 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0805 10:42:53.436715    9085 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0805 10:42:53.436755    9085 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0805 10:42:53.436758    9085 kubeadm.go:310] 
	I0805 10:42:53.436812    9085 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0805 10:42:53.436857    9085 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0805 10:42:53.436862    9085 kubeadm.go:310] 
	I0805 10:42:53.436904    9085 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token qtm75q.q9mybrkyko74z444 \
	I0805 10:42:53.436955    9085 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:11215ef01abcfb912d109f6d89af227ccae4ec1efb0dbe7ad4cd9a56e17c4c25 \
	I0805 10:42:53.436968    9085 kubeadm.go:310] 	--control-plane 
	I0805 10:42:53.436972    9085 kubeadm.go:310] 
	I0805 10:42:53.437031    9085 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0805 10:42:53.437035    9085 kubeadm.go:310] 
	I0805 10:42:53.437074    9085 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token qtm75q.q9mybrkyko74z444 \
	I0805 10:42:53.437130    9085 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:11215ef01abcfb912d109f6d89af227ccae4ec1efb0dbe7ad4cd9a56e17c4c25 
	I0805 10:42:53.437199    9085 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0805 10:42:53.437207    9085 cni.go:84] Creating CNI manager for ""
	I0805 10:42:53.437216    9085 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 10:42:53.444474    9085 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0805 10:42:53.448630    9085 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0805 10:42:53.452119    9085 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0805 10:42:53.456940    9085 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0805 10:42:53.456985    9085 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 10:42:53.457019    9085 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-952000 minikube.k8s.io/updated_at=2024_08_05T10_42_53_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=7ab1b4d76a5d87b75cd4b70be3ee81f93304b0ab minikube.k8s.io/name=running-upgrade-952000 minikube.k8s.io/primary=true
	I0805 10:42:53.500565    9085 kubeadm.go:1113] duration metric: took 43.619125ms to wait for elevateKubeSystemPrivileges
	I0805 10:42:53.500586    9085 ops.go:34] apiserver oom_adj: -16
	I0805 10:42:53.500685    9085 kubeadm.go:394] duration metric: took 4m11.688355375s to StartCluster
	I0805 10:42:53.500696    9085 settings.go:142] acquiring lock: {Name:mk1ff1cf525c2989e8f58a78ff9196d0a088a47b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 10:42:53.500774    9085 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19374-6507/kubeconfig
	I0805 10:42:53.501191    9085 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19374-6507/kubeconfig: {Name:mkf52f0a49b2ae63f3d2905c5633513b3086a0af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 10:42:53.501373    9085 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 10:42:53.501427    9085 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0805 10:42:53.501469    9085 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-952000"
	I0805 10:42:53.501485    9085 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-952000"
	W0805 10:42:53.501489    9085 addons.go:243] addon storage-provisioner should already be in state true
	I0805 10:42:53.501501    9085 host.go:66] Checking if "running-upgrade-952000" exists ...
	I0805 10:42:53.501514    9085 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-952000"
	I0805 10:42:53.501532    9085 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-952000"
	I0805 10:42:53.501575    9085 config.go:182] Loaded profile config "running-upgrade-952000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0805 10:42:53.505586    9085 out.go:177] * Verifying Kubernetes components...
	I0805 10:42:53.512652    9085 kapi.go:59] client config for running-upgrade-952000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/running-upgrade-952000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/running-upgrade-952000/client.key", CAFile:"/Users/jenkins/minikube-integration/19374-6507/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103aa42e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0805 10:42:53.512837    9085 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 10:42:51.383375    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:42:51.383398    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:42:53.512930    9085 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-952000"
	W0805 10:42:53.513034    9085 addons.go:243] addon default-storageclass should already be in state true
	I0805 10:42:53.513049    9085 host.go:66] Checking if "running-upgrade-952000" exists ...
	I0805 10:42:53.514012    9085 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0805 10:42:53.514021    9085 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0805 10:42:53.514032    9085 sshutil.go:53] new ssh client: &{IP:localhost Port:51192 SSHKeyPath:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/running-upgrade-952000/id_rsa Username:docker}
	I0805 10:42:53.516763    9085 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 10:42:53.520834    9085 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 10:42:53.520843    9085 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0805 10:42:53.520851    9085 sshutil.go:53] new ssh client: &{IP:localhost Port:51192 SSHKeyPath:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/running-upgrade-952000/id_rsa Username:docker}
	I0805 10:42:53.592237    9085 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 10:42:53.597575    9085 api_server.go:52] waiting for apiserver process to appear ...
	I0805 10:42:53.597626    9085 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 10:42:53.602187    9085 api_server.go:72] duration metric: took 100.802041ms to wait for apiserver process to appear ...
	I0805 10:42:53.602195    9085 api_server.go:88] waiting for apiserver healthz status ...
	I0805 10:42:53.602201    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:42:53.623285    9085 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0805 10:42:53.630685    9085 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 10:42:56.383616    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:42:56.383644    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:42:58.604217    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:42:58.604244    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:43:01.383949    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:43:01.383990    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:43:03.604364    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:43:03.604399    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:43:06.384460    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:43:06.384491    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:43:11.385091    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:43:11.385133    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0805 10:43:11.711287    9068 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0805 10:43:11.716193    9068 out.go:177] * Enabled addons: storage-provisioner
	I0805 10:43:08.604674    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:43:08.604702    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:43:11.724204    9068 addons.go:510] duration metric: took 30.446681834s for enable addons: enabled=[storage-provisioner]
	I0805 10:43:13.605051    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:43:13.605074    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:43:16.385949    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:43:16.385996    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:43:18.605475    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:43:18.605518    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:43:23.606144    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:43:23.606213    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0805 10:43:23.979049    9085 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0805 10:43:23.987246    9085 out.go:177] * Enabled addons: storage-provisioner
	I0805 10:43:21.387122    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:43:21.387165    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:43:23.994314    9085 addons.go:510] duration metric: took 30.493296333s for enable addons: enabled=[storage-provisioner]
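The asymmetry in the addon results above is structural: storage-provisioner only needs a manifest applied over SSH with the node-local kubectl, while default-storageclass must list StorageClasses through the apiserver at 10.0.2.15:8443, which both processes still cannot reach, hence the dial tcp i/o timeout. The probe that keeps failing can be reproduced by hand (a diagnostic sketch, not part of the test run):

    # -k because the cluster CA is not in the host trust store;
    # --max-time mirrors the client-side deadline seen in the log
    curl -sk --max-time 5 https://10.0.2.15:8443/healthz && echo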
	I0805 10:43:26.388484    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:43:26.388533    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:43:28.607410    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:43:28.607443    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:43:31.390182    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:43:31.390203    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:43:33.608582    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:43:33.608660    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:43:36.392268    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:43:36.392308    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:43:38.610101    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:43:38.610140    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:43:41.394568    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:43:41.394736    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:43:41.405296    9068 logs.go:276] 1 containers: [ae2e3b5c46bc]
	I0805 10:43:41.405369    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:43:41.415758    9068 logs.go:276] 1 containers: [207bbc832181]
	I0805 10:43:41.415836    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:43:41.426216    9068 logs.go:276] 2 containers: [e2daae6ade13 1a7c8223b623]
	I0805 10:43:41.426281    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:43:41.436723    9068 logs.go:276] 1 containers: [0db98522e2bf]
	I0805 10:43:41.436793    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:43:41.447931    9068 logs.go:276] 1 containers: [acf7c60c32f0]
	I0805 10:43:41.448003    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:43:41.458351    9068 logs.go:276] 1 containers: [6e0936bd3eb7]
	I0805 10:43:41.458423    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:43:41.467993    9068 logs.go:276] 0 containers: []
	W0805 10:43:41.468004    9068 logs.go:278] No container was found matching "kindnet"
	I0805 10:43:41.468059    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:43:41.478551    9068 logs.go:276] 1 containers: [385293316d74]
	I0805 10:43:41.478566    9068 logs.go:123] Gathering logs for kubelet ...
	I0805 10:43:41.478572    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:43:41.512534    9068 logs.go:123] Gathering logs for kube-apiserver [ae2e3b5c46bc] ...
	I0805 10:43:41.512558    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2e3b5c46bc"
	I0805 10:43:41.528344    9068 logs.go:123] Gathering logs for etcd [207bbc832181] ...
	I0805 10:43:41.528360    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207bbc832181"
	I0805 10:43:41.544979    9068 logs.go:123] Gathering logs for container status ...
	I0805 10:43:41.544990    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:43:41.556525    9068 logs.go:123] Gathering logs for dmesg ...
	I0805 10:43:41.556539    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:43:41.561259    9068 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:43:41.561268    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:43:41.601635    9068 logs.go:123] Gathering logs for coredns [e2daae6ade13] ...
	I0805 10:43:41.601646    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2daae6ade13"
	I0805 10:43:41.613410    9068 logs.go:123] Gathering logs for coredns [1a7c8223b623] ...
	I0805 10:43:41.613423    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a7c8223b623"
	I0805 10:43:41.624986    9068 logs.go:123] Gathering logs for kube-scheduler [0db98522e2bf] ...
	I0805 10:43:41.624996    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0db98522e2bf"
	I0805 10:43:41.639655    9068 logs.go:123] Gathering logs for kube-proxy [acf7c60c32f0] ...
	I0805 10:43:41.639665    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf7c60c32f0"
	I0805 10:43:41.651293    9068 logs.go:123] Gathering logs for kube-controller-manager [6e0936bd3eb7] ...
	I0805 10:43:41.651304    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e0936bd3eb7"
	I0805 10:43:41.668177    9068 logs.go:123] Gathering logs for storage-provisioner [385293316d74] ...
	I0805 10:43:41.668187    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385293316d74"
	I0805 10:43:41.679330    9068 logs.go:123] Gathering logs for Docker ...
	I0805 10:43:41.679340    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:43:44.206058    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:43:43.611943    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:43:43.611992    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:43:49.208462    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:43:49.208854    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:43:49.241514    9068 logs.go:276] 1 containers: [ae2e3b5c46bc]
	I0805 10:43:49.241661    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:43:49.258087    9068 logs.go:276] 1 containers: [207bbc832181]
	I0805 10:43:49.258177    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:43:49.273661    9068 logs.go:276] 2 containers: [e2daae6ade13 1a7c8223b623]
	I0805 10:43:49.273733    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:43:49.287903    9068 logs.go:276] 1 containers: [0db98522e2bf]
	I0805 10:43:49.287971    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:43:49.298413    9068 logs.go:276] 1 containers: [acf7c60c32f0]
	I0805 10:43:49.298489    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:43:49.309809    9068 logs.go:276] 1 containers: [6e0936bd3eb7]
	I0805 10:43:49.309881    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:43:49.320564    9068 logs.go:276] 0 containers: []
	W0805 10:43:49.320576    9068 logs.go:278] No container was found matching "kindnet"
	I0805 10:43:49.320633    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:43:49.342645    9068 logs.go:276] 1 containers: [385293316d74]
	I0805 10:43:49.342662    9068 logs.go:123] Gathering logs for kube-scheduler [0db98522e2bf] ...
	I0805 10:43:49.342668    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0db98522e2bf"
	I0805 10:43:49.357243    9068 logs.go:123] Gathering logs for storage-provisioner [385293316d74] ...
	I0805 10:43:49.357253    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385293316d74"
	I0805 10:43:49.368913    9068 logs.go:123] Gathering logs for Docker ...
	I0805 10:43:49.368923    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:43:49.394121    9068 logs.go:123] Gathering logs for container status ...
	I0805 10:43:49.394133    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:43:49.405483    9068 logs.go:123] Gathering logs for kubelet ...
	I0805 10:43:49.405494    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:43:49.440808    9068 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:43:49.440820    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:43:49.478715    9068 logs.go:123] Gathering logs for kube-apiserver [ae2e3b5c46bc] ...
	I0805 10:43:49.478727    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2e3b5c46bc"
	I0805 10:43:49.494556    9068 logs.go:123] Gathering logs for coredns [1a7c8223b623] ...
	I0805 10:43:49.494568    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a7c8223b623"
	I0805 10:43:49.509731    9068 logs.go:123] Gathering logs for kube-proxy [acf7c60c32f0] ...
	I0805 10:43:49.509741    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf7c60c32f0"
	I0805 10:43:49.521585    9068 logs.go:123] Gathering logs for kube-controller-manager [6e0936bd3eb7] ...
	I0805 10:43:49.521597    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e0936bd3eb7"
	I0805 10:43:49.539165    9068 logs.go:123] Gathering logs for dmesg ...
	I0805 10:43:49.539180    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:43:49.543660    9068 logs.go:123] Gathering logs for etcd [207bbc832181] ...
	I0805 10:43:49.543666    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207bbc832181"
	I0805 10:43:49.557920    9068 logs.go:123] Gathering logs for coredns [e2daae6ade13] ...
	I0805 10:43:49.557931    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2daae6ade13"
	I0805 10:43:48.613479    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:43:48.613506    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:43:52.071731    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:43:53.615704    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:43:53.615864    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:43:53.627269    9085 logs.go:276] 1 containers: [a10e618b4b87]
	I0805 10:43:53.627340    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:43:53.637553    9085 logs.go:276] 1 containers: [dd283aa612f7]
	I0805 10:43:53.637626    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:43:53.648356    9085 logs.go:276] 2 containers: [09cf1cd1eb79 3c0b270bfc85]
	I0805 10:43:53.648427    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:43:53.658638    9085 logs.go:276] 1 containers: [4cd2114f032c]
	I0805 10:43:53.658706    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:43:53.669190    9085 logs.go:276] 1 containers: [0b4747b7c71b]
	I0805 10:43:53.669260    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:43:53.679645    9085 logs.go:276] 1 containers: [6b61a4d7e65e]
	I0805 10:43:53.679712    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:43:53.689952    9085 logs.go:276] 0 containers: []
	W0805 10:43:53.689965    9085 logs.go:278] No container was found matching "kindnet"
	I0805 10:43:53.690027    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:43:53.700359    9085 logs.go:276] 1 containers: [5ffcf9115c5b]
	I0805 10:43:53.700377    9085 logs.go:123] Gathering logs for kubelet ...
	I0805 10:43:53.700382    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:43:53.739011    9085 logs.go:123] Gathering logs for storage-provisioner [5ffcf9115c5b] ...
	I0805 10:43:53.739019    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ffcf9115c5b"
	I0805 10:43:53.750193    9085 logs.go:123] Gathering logs for Docker ...
	I0805 10:43:53.750203    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:43:53.775588    9085 logs.go:123] Gathering logs for coredns [09cf1cd1eb79] ...
	I0805 10:43:53.775599    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09cf1cd1eb79"
	I0805 10:43:53.787121    9085 logs.go:123] Gathering logs for coredns [3c0b270bfc85] ...
	I0805 10:43:53.787132    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c0b270bfc85"
	I0805 10:43:53.798838    9085 logs.go:123] Gathering logs for kube-scheduler [4cd2114f032c] ...
	I0805 10:43:53.798848    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4cd2114f032c"
	I0805 10:43:53.813128    9085 logs.go:123] Gathering logs for kube-proxy [0b4747b7c71b] ...
	I0805 10:43:53.813141    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b4747b7c71b"
	I0805 10:43:53.825146    9085 logs.go:123] Gathering logs for dmesg ...
	I0805 10:43:53.825157    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:43:53.830298    9085 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:43:53.830305    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:43:53.865066    9085 logs.go:123] Gathering logs for kube-apiserver [a10e618b4b87] ...
	I0805 10:43:53.865077    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a10e618b4b87"
	I0805 10:43:53.879881    9085 logs.go:123] Gathering logs for etcd [dd283aa612f7] ...
	I0805 10:43:53.879896    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd283aa612f7"
	I0805 10:43:53.894143    9085 logs.go:123] Gathering logs for kube-controller-manager [6b61a4d7e65e] ...
	I0805 10:43:53.894158    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b61a4d7e65e"
	I0805 10:43:53.910646    9085 logs.go:123] Gathering logs for container status ...
	I0805 10:43:53.910656    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
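
Editor's note: each retry re-derives the container IDs rather than caching them: one `docker ps -a` per component, filtered on the kubeadm container-naming convention (`k8s_<component>_...`) and reduced to bare IDs with a Go template. A sketch of the same enumeration run locally (minikube actually runs it inside the guest over SSH via ssh_runner.go):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerIDs returns the IDs of all containers, running or exited,
    // whose name matches k8s_<component> — the same filter as the log above.
    func containerIDs(component string) ([]string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component,
    		"--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
    		"kube-scheduler", "kube-proxy", "kube-controller-manager",
    		"kindnet", "storage-provisioner"} {
    		ids, err := containerIDs(c)
    		if err != nil {
    			fmt.Println(c, "error:", err)
    			continue
    		}
    		// Mirrors "logs.go:276] N containers: [...]"; an empty result is
    		// what triggers the `No container was found matching "kindnet"`
    		// warning above (this cluster does not use the kindnet CNI).
    		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
    	}
    }
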
	I0805 10:43:56.424104    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:43:57.074145    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:43:57.074457    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:43:57.101010    9068 logs.go:276] 1 containers: [ae2e3b5c46bc]
	I0805 10:43:57.101132    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:43:57.118584    9068 logs.go:276] 1 containers: [207bbc832181]
	I0805 10:43:57.118668    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:43:57.132274    9068 logs.go:276] 2 containers: [e2daae6ade13 1a7c8223b623]
	I0805 10:43:57.132337    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:43:57.147396    9068 logs.go:276] 1 containers: [0db98522e2bf]
	I0805 10:43:57.147460    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:43:57.157938    9068 logs.go:276] 1 containers: [acf7c60c32f0]
	I0805 10:43:57.158007    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:43:57.168348    9068 logs.go:276] 1 containers: [6e0936bd3eb7]
	I0805 10:43:57.168409    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:43:57.178270    9068 logs.go:276] 0 containers: []
	W0805 10:43:57.178283    9068 logs.go:278] No container was found matching "kindnet"
	I0805 10:43:57.178340    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:43:57.188831    9068 logs.go:276] 1 containers: [385293316d74]
	I0805 10:43:57.188847    9068 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:43:57.188854    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:43:57.223960    9068 logs.go:123] Gathering logs for etcd [207bbc832181] ...
	I0805 10:43:57.223974    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207bbc832181"
	I0805 10:43:57.237712    9068 logs.go:123] Gathering logs for coredns [e2daae6ade13] ...
	I0805 10:43:57.237726    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2daae6ade13"
	I0805 10:43:57.249824    9068 logs.go:123] Gathering logs for kube-scheduler [0db98522e2bf] ...
	I0805 10:43:57.249840    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0db98522e2bf"
	I0805 10:43:57.264289    9068 logs.go:123] Gathering logs for kube-proxy [acf7c60c32f0] ...
	I0805 10:43:57.264299    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf7c60c32f0"
	I0805 10:43:57.275872    9068 logs.go:123] Gathering logs for storage-provisioner [385293316d74] ...
	I0805 10:43:57.275886    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385293316d74"
	I0805 10:43:57.295393    9068 logs.go:123] Gathering logs for dmesg ...
	I0805 10:43:57.295409    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:43:57.302204    9068 logs.go:123] Gathering logs for kube-apiserver [ae2e3b5c46bc] ...
	I0805 10:43:57.302213    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2e3b5c46bc"
	I0805 10:43:57.316916    9068 logs.go:123] Gathering logs for coredns [1a7c8223b623] ...
	I0805 10:43:57.316931    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a7c8223b623"
	I0805 10:43:57.329146    9068 logs.go:123] Gathering logs for kube-controller-manager [6e0936bd3eb7] ...
	I0805 10:43:57.329157    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e0936bd3eb7"
	I0805 10:43:57.346788    9068 logs.go:123] Gathering logs for Docker ...
	I0805 10:43:57.346802    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:43:57.371581    9068 logs.go:123] Gathering logs for container status ...
	I0805 10:43:57.371589    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:43:57.382899    9068 logs.go:123] Gathering logs for kubelet ...
	I0805 10:43:57.382914    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
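
Editor's note: the gathering pass itself is a fixed menu: `docker logs --tail 400` for each container ID, plus `journalctl -n 400` for the kubelet and the Docker/cri-docker units. A sketch of that per-source collection, run through bash the same way ssh_runner.go does; passwordless sudo inside the guest is assumed:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // gather runs one collection command through bash ("/bin/bash -c ...",
    // as in the log) and returns its combined stdout/stderr.
    func gather(cmd string) (string, error) {
    	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    	return string(out), err
    }

    func main() {
    	sources := map[string]string{
    		// Last 400 lines of one container's logs, by ID (ID from the log above).
    		"kube-apiserver": "docker logs --tail 400 ae2e3b5c46bc",
    		// Last 400 journal entries for a systemd unit.
    		"kubelet": "sudo journalctl -u kubelet -n 400",
    		// Two units merged into one stream.
    		"Docker": "sudo journalctl -u docker -u cri-docker -n 400",
    	}
    	for name, cmd := range sources {
    		fmt.Printf("Gathering logs for %s ...\n", name)
    		out, err := gather(cmd)
    		if err != nil {
    			fmt.Println("  error:", err)
    			continue
    		}
    		fmt.Println(out)
    	}
    }
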
	I0805 10:43:59.919605    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:44:01.426340    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:44:01.426514    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:44:01.444910    9085 logs.go:276] 1 containers: [a10e618b4b87]
	I0805 10:44:01.444995    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:44:01.458986    9085 logs.go:276] 1 containers: [dd283aa612f7]
	I0805 10:44:01.459047    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:44:01.471010    9085 logs.go:276] 2 containers: [09cf1cd1eb79 3c0b270bfc85]
	I0805 10:44:01.471069    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:44:01.481054    9085 logs.go:276] 1 containers: [4cd2114f032c]
	I0805 10:44:01.481112    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:44:01.495400    9085 logs.go:276] 1 containers: [0b4747b7c71b]
	I0805 10:44:01.495470    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:44:01.511408    9085 logs.go:276] 1 containers: [6b61a4d7e65e]
	I0805 10:44:01.511474    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:44:01.521722    9085 logs.go:276] 0 containers: []
	W0805 10:44:01.521734    9085 logs.go:278] No container was found matching "kindnet"
	I0805 10:44:01.521788    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:44:01.533062    9085 logs.go:276] 1 containers: [5ffcf9115c5b]
	I0805 10:44:01.533077    9085 logs.go:123] Gathering logs for kube-proxy [0b4747b7c71b] ...
	I0805 10:44:01.533083    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b4747b7c71b"
	I0805 10:44:01.545035    9085 logs.go:123] Gathering logs for storage-provisioner [5ffcf9115c5b] ...
	I0805 10:44:01.545045    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ffcf9115c5b"
	I0805 10:44:01.557203    9085 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:44:01.557215    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:44:01.594182    9085 logs.go:123] Gathering logs for kube-apiserver [a10e618b4b87] ...
	I0805 10:44:01.594193    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a10e618b4b87"
	I0805 10:44:01.609502    9085 logs.go:123] Gathering logs for coredns [09cf1cd1eb79] ...
	I0805 10:44:01.609512    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09cf1cd1eb79"
	I0805 10:44:01.620855    9085 logs.go:123] Gathering logs for kube-scheduler [4cd2114f032c] ...
	I0805 10:44:01.620865    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4cd2114f032c"
	I0805 10:44:01.636904    9085 logs.go:123] Gathering logs for kube-controller-manager [6b61a4d7e65e] ...
	I0805 10:44:01.636918    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b61a4d7e65e"
	I0805 10:44:01.654329    9085 logs.go:123] Gathering logs for Docker ...
	I0805 10:44:01.654339    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:44:01.679225    9085 logs.go:123] Gathering logs for container status ...
	I0805 10:44:01.679233    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:44:01.691789    9085 logs.go:123] Gathering logs for kubelet ...
	I0805 10:44:01.691801    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:44:01.731122    9085 logs.go:123] Gathering logs for dmesg ...
	I0805 10:44:01.731138    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:44:01.735570    9085 logs.go:123] Gathering logs for etcd [dd283aa612f7] ...
	I0805 10:44:01.735576    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd283aa612f7"
	I0805 10:44:01.753517    9085 logs.go:123] Gathering logs for coredns [3c0b270bfc85] ...
	I0805 10:44:01.753529    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c0b270bfc85"
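
Editor's note on the dmesg invocation: assuming util-linux dmesg, `-P` disables the pager, `-H` selects human-readable output, `-L=never` turns colour off, and `--level warn,err,crit,alert,emerg` keeps only warning-or-worse records before the final `tail -n 400` cap. The same pipeline with long-form flags spelled out, as a sketch:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Long-option spelling of the dmesg pipeline from the log
    	// (assumes util-linux dmesg; the short flags -PH -L=never are equivalent).
    	cmd := "sudo dmesg --nopager --human --color=never " +
    		"--level warn,err,crit,alert,emerg | tail -n 400"
    	out, _ := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    	fmt.Print(string(out))
    }
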
	I0805 10:44:04.921936    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:44:04.922355    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:44:04.959004    9068 logs.go:276] 1 containers: [ae2e3b5c46bc]
	I0805 10:44:04.959139    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:44:04.980927    9068 logs.go:276] 1 containers: [207bbc832181]
	I0805 10:44:04.981026    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:44:04.996835    9068 logs.go:276] 2 containers: [e2daae6ade13 1a7c8223b623]
	I0805 10:44:04.996903    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:44:05.009255    9068 logs.go:276] 1 containers: [0db98522e2bf]
	I0805 10:44:05.009325    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:44:05.020264    9068 logs.go:276] 1 containers: [acf7c60c32f0]
	I0805 10:44:05.020331    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:44:05.038397    9068 logs.go:276] 1 containers: [6e0936bd3eb7]
	I0805 10:44:05.038462    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:44:05.048623    9068 logs.go:276] 0 containers: []
	W0805 10:44:05.048633    9068 logs.go:278] No container was found matching "kindnet"
	I0805 10:44:05.048685    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:44:05.059403    9068 logs.go:276] 1 containers: [385293316d74]
	I0805 10:44:05.059421    9068 logs.go:123] Gathering logs for kube-scheduler [0db98522e2bf] ...
	I0805 10:44:05.059426    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0db98522e2bf"
	I0805 10:44:05.077253    9068 logs.go:123] Gathering logs for container status ...
	I0805 10:44:05.077265    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:44:05.090163    9068 logs.go:123] Gathering logs for kubelet ...
	I0805 10:44:05.090175    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:44:05.125691    9068 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:44:05.125700    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:44:05.159808    9068 logs.go:123] Gathering logs for coredns [1a7c8223b623] ...
	I0805 10:44:05.159822    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a7c8223b623"
	I0805 10:44:05.171783    9068 logs.go:123] Gathering logs for coredns [e2daae6ade13] ...
	I0805 10:44:05.171796    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2daae6ade13"
	I0805 10:44:05.183527    9068 logs.go:123] Gathering logs for kube-proxy [acf7c60c32f0] ...
	I0805 10:44:05.183538    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf7c60c32f0"
	I0805 10:44:05.194930    9068 logs.go:123] Gathering logs for kube-controller-manager [6e0936bd3eb7] ...
	I0805 10:44:05.194943    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e0936bd3eb7"
	I0805 10:44:05.212495    9068 logs.go:123] Gathering logs for storage-provisioner [385293316d74] ...
	I0805 10:44:05.212505    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385293316d74"
	I0805 10:44:05.224687    9068 logs.go:123] Gathering logs for Docker ...
	I0805 10:44:05.224702    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:44:05.249513    9068 logs.go:123] Gathering logs for dmesg ...
	I0805 10:44:05.249521    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:44:05.253832    9068 logs.go:123] Gathering logs for kube-apiserver [ae2e3b5c46bc] ...
	I0805 10:44:05.253841    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2e3b5c46bc"
	I0805 10:44:05.268358    9068 logs.go:123] Gathering logs for etcd [207bbc832181] ...
	I0805 10:44:05.268372    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207bbc832181"
	I0805 10:44:04.270887    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:44:07.784530    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:44:09.273327    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:44:09.273578    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:44:09.295348    9085 logs.go:276] 1 containers: [a10e618b4b87]
	I0805 10:44:09.295440    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:44:09.311868    9085 logs.go:276] 1 containers: [dd283aa612f7]
	I0805 10:44:09.311947    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:44:09.323527    9085 logs.go:276] 2 containers: [09cf1cd1eb79 3c0b270bfc85]
	I0805 10:44:09.323587    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:44:09.333883    9085 logs.go:276] 1 containers: [4cd2114f032c]
	I0805 10:44:09.333949    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:44:09.344030    9085 logs.go:276] 1 containers: [0b4747b7c71b]
	I0805 10:44:09.344106    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:44:09.359175    9085 logs.go:276] 1 containers: [6b61a4d7e65e]
	I0805 10:44:09.359238    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:44:09.368715    9085 logs.go:276] 0 containers: []
	W0805 10:44:09.368728    9085 logs.go:278] No container was found matching "kindnet"
	I0805 10:44:09.368786    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:44:09.379254    9085 logs.go:276] 1 containers: [5ffcf9115c5b]
	I0805 10:44:09.379268    9085 logs.go:123] Gathering logs for container status ...
	I0805 10:44:09.379275    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:44:09.390504    9085 logs.go:123] Gathering logs for kubelet ...
	I0805 10:44:09.390519    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:44:09.427251    9085 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:44:09.427262    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:44:09.464545    9085 logs.go:123] Gathering logs for kube-apiserver [a10e618b4b87] ...
	I0805 10:44:09.464556    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a10e618b4b87"
	I0805 10:44:09.479832    9085 logs.go:123] Gathering logs for etcd [dd283aa612f7] ...
	I0805 10:44:09.479843    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd283aa612f7"
	I0805 10:44:09.495441    9085 logs.go:123] Gathering logs for coredns [3c0b270bfc85] ...
	I0805 10:44:09.495450    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c0b270bfc85"
	I0805 10:44:09.507329    9085 logs.go:123] Gathering logs for storage-provisioner [5ffcf9115c5b] ...
	I0805 10:44:09.507339    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ffcf9115c5b"
	I0805 10:44:09.521721    9085 logs.go:123] Gathering logs for Docker ...
	I0805 10:44:09.521732    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:44:09.546555    9085 logs.go:123] Gathering logs for dmesg ...
	I0805 10:44:09.546566    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:44:09.551044    9085 logs.go:123] Gathering logs for coredns [09cf1cd1eb79] ...
	I0805 10:44:09.551051    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09cf1cd1eb79"
	I0805 10:44:09.563200    9085 logs.go:123] Gathering logs for kube-scheduler [4cd2114f032c] ...
	I0805 10:44:09.563210    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4cd2114f032c"
	I0805 10:44:09.577955    9085 logs.go:123] Gathering logs for kube-proxy [0b4747b7c71b] ...
	I0805 10:44:09.577966    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b4747b7c71b"
	I0805 10:44:09.589469    9085 logs.go:123] Gathering logs for kube-controller-manager [6b61a4d7e65e] ...
	I0805 10:44:09.589480    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b61a4d7e65e"
	I0805 10:44:12.107208    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:44:12.786844    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:44:12.787193    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:44:12.825574    9068 logs.go:276] 1 containers: [ae2e3b5c46bc]
	I0805 10:44:12.825698    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:44:12.844854    9068 logs.go:276] 1 containers: [207bbc832181]
	I0805 10:44:12.844953    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:44:12.859548    9068 logs.go:276] 2 containers: [e2daae6ade13 1a7c8223b623]
	I0805 10:44:12.859625    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:44:12.871632    9068 logs.go:276] 1 containers: [0db98522e2bf]
	I0805 10:44:12.871700    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:44:12.887566    9068 logs.go:276] 1 containers: [acf7c60c32f0]
	I0805 10:44:12.887637    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:44:12.902524    9068 logs.go:276] 1 containers: [6e0936bd3eb7]
	I0805 10:44:12.902595    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:44:12.912751    9068 logs.go:276] 0 containers: []
	W0805 10:44:12.912769    9068 logs.go:278] No container was found matching "kindnet"
	I0805 10:44:12.912825    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:44:12.924158    9068 logs.go:276] 1 containers: [385293316d74]
	I0805 10:44:12.924179    9068 logs.go:123] Gathering logs for kube-controller-manager [6e0936bd3eb7] ...
	I0805 10:44:12.924184    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e0936bd3eb7"
	I0805 10:44:12.941761    9068 logs.go:123] Gathering logs for Docker ...
	I0805 10:44:12.941773    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:44:12.965443    9068 logs.go:123] Gathering logs for kubelet ...
	I0805 10:44:12.965457    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:44:13.000662    9068 logs.go:123] Gathering logs for dmesg ...
	I0805 10:44:13.000679    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:44:13.005523    9068 logs.go:123] Gathering logs for etcd [207bbc832181] ...
	I0805 10:44:13.005531    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207bbc832181"
	I0805 10:44:13.019242    9068 logs.go:123] Gathering logs for coredns [e2daae6ade13] ...
	I0805 10:44:13.019254    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2daae6ade13"
	I0805 10:44:13.031061    9068 logs.go:123] Gathering logs for coredns [1a7c8223b623] ...
	I0805 10:44:13.031075    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a7c8223b623"
	I0805 10:44:13.042730    9068 logs.go:123] Gathering logs for kube-proxy [acf7c60c32f0] ...
	I0805 10:44:13.042746    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf7c60c32f0"
	I0805 10:44:13.054909    9068 logs.go:123] Gathering logs for container status ...
	I0805 10:44:13.054921    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:44:13.066083    9068 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:44:13.066095    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:44:13.103117    9068 logs.go:123] Gathering logs for kube-apiserver [ae2e3b5c46bc] ...
	I0805 10:44:13.103128    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2e3b5c46bc"
	I0805 10:44:13.119395    9068 logs.go:123] Gathering logs for kube-scheduler [0db98522e2bf] ...
	I0805 10:44:13.119407    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0db98522e2bf"
	I0805 10:44:13.134628    9068 logs.go:123] Gathering logs for storage-provisioner [385293316d74] ...
	I0805 10:44:13.134639    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385293316d74"
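
Editor's note: "describe nodes" is the one collection step that goes through the Kubernetes API rather than the container runtime. minikube pins a kubectl binary matching the cluster version under /var/lib/minikube/binaries/<version>/ and points it at the node-local kubeconfig, so the step is independent of the host's kubectl and context; with the apiserver unhealthy it is also the step most likely to return nothing useful. The equivalent standalone invocation, as a sketch:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Same command as the "describe nodes" lines above, run directly.
    	out, err := exec.Command("sudo",
    		"/var/lib/minikube/binaries/v1.24.1/kubectl",
    		"describe", "nodes",
    		"--kubeconfig=/var/lib/minikube/kubeconfig").CombinedOutput()
    	if err != nil {
    		fmt.Println("describe nodes failed:", err)
    	}
    	fmt.Print(string(out))
    }
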
	I0805 10:44:17.108570    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:44:17.108758    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:44:17.131117    9085 logs.go:276] 1 containers: [a10e618b4b87]
	I0805 10:44:17.131221    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:44:17.147343    9085 logs.go:276] 1 containers: [dd283aa612f7]
	I0805 10:44:17.147418    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:44:17.161036    9085 logs.go:276] 2 containers: [09cf1cd1eb79 3c0b270bfc85]
	I0805 10:44:17.161118    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:44:17.172374    9085 logs.go:276] 1 containers: [4cd2114f032c]
	I0805 10:44:17.172440    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:44:17.182751    9085 logs.go:276] 1 containers: [0b4747b7c71b]
	I0805 10:44:17.182818    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:44:17.193249    9085 logs.go:276] 1 containers: [6b61a4d7e65e]
	I0805 10:44:17.193320    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:44:17.203504    9085 logs.go:276] 0 containers: []
	W0805 10:44:17.203516    9085 logs.go:278] No container was found matching "kindnet"
	I0805 10:44:17.203573    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:44:17.214211    9085 logs.go:276] 1 containers: [5ffcf9115c5b]
	I0805 10:44:17.214230    9085 logs.go:123] Gathering logs for kubelet ...
	I0805 10:44:17.214236    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:44:17.250601    9085 logs.go:123] Gathering logs for coredns [3c0b270bfc85] ...
	I0805 10:44:17.250610    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c0b270bfc85"
	I0805 10:44:17.262096    9085 logs.go:123] Gathering logs for kube-scheduler [4cd2114f032c] ...
	I0805 10:44:17.262108    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4cd2114f032c"
	I0805 10:44:17.276379    9085 logs.go:123] Gathering logs for storage-provisioner [5ffcf9115c5b] ...
	I0805 10:44:17.276391    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ffcf9115c5b"
	I0805 10:44:17.288236    9085 logs.go:123] Gathering logs for container status ...
	I0805 10:44:17.288248    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:44:17.299887    9085 logs.go:123] Gathering logs for kube-proxy [0b4747b7c71b] ...
	I0805 10:44:17.299898    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b4747b7c71b"
	I0805 10:44:17.319551    9085 logs.go:123] Gathering logs for kube-controller-manager [6b61a4d7e65e] ...
	I0805 10:44:17.319562    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b61a4d7e65e"
	I0805 10:44:15.647919    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:44:17.337099    9085 logs.go:123] Gathering logs for Docker ...
	I0805 10:44:17.337109    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:44:17.360526    9085 logs.go:123] Gathering logs for dmesg ...
	I0805 10:44:17.360535    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:44:17.365130    9085 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:44:17.365137    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:44:17.401043    9085 logs.go:123] Gathering logs for kube-apiserver [a10e618b4b87] ...
	I0805 10:44:17.401060    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a10e618b4b87"
	I0805 10:44:17.415584    9085 logs.go:123] Gathering logs for etcd [dd283aa612f7] ...
	I0805 10:44:17.415599    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd283aa612f7"
	I0805 10:44:17.429317    9085 logs.go:123] Gathering logs for coredns [09cf1cd1eb79] ...
	I0805 10:44:17.429328    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09cf1cd1eb79"
	I0805 10:44:19.942751    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:44:20.648414    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:44:20.648729    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:44:20.680606    9068 logs.go:276] 1 containers: [ae2e3b5c46bc]
	I0805 10:44:20.680732    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:44:20.699223    9068 logs.go:276] 1 containers: [207bbc832181]
	I0805 10:44:20.699319    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:44:20.712868    9068 logs.go:276] 2 containers: [e2daae6ade13 1a7c8223b623]
	I0805 10:44:20.712947    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:44:20.725052    9068 logs.go:276] 1 containers: [0db98522e2bf]
	I0805 10:44:20.725123    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:44:20.744417    9068 logs.go:276] 1 containers: [acf7c60c32f0]
	I0805 10:44:20.744489    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:44:20.755606    9068 logs.go:276] 1 containers: [6e0936bd3eb7]
	I0805 10:44:20.755680    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:44:20.766450    9068 logs.go:276] 0 containers: []
	W0805 10:44:20.766462    9068 logs.go:278] No container was found matching "kindnet"
	I0805 10:44:20.766514    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:44:20.777025    9068 logs.go:276] 1 containers: [385293316d74]
	I0805 10:44:20.777041    9068 logs.go:123] Gathering logs for dmesg ...
	I0805 10:44:20.777047    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:44:20.781437    9068 logs.go:123] Gathering logs for coredns [1a7c8223b623] ...
	I0805 10:44:20.781445    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a7c8223b623"
	I0805 10:44:20.793160    9068 logs.go:123] Gathering logs for kube-scheduler [0db98522e2bf] ...
	I0805 10:44:20.793170    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0db98522e2bf"
	I0805 10:44:20.807727    9068 logs.go:123] Gathering logs for kube-proxy [acf7c60c32f0] ...
	I0805 10:44:20.807736    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf7c60c32f0"
	I0805 10:44:20.821282    9068 logs.go:123] Gathering logs for kube-controller-manager [6e0936bd3eb7] ...
	I0805 10:44:20.821299    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e0936bd3eb7"
	I0805 10:44:20.839067    9068 logs.go:123] Gathering logs for storage-provisioner [385293316d74] ...
	I0805 10:44:20.839077    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385293316d74"
	I0805 10:44:20.850789    9068 logs.go:123] Gathering logs for container status ...
	I0805 10:44:20.850799    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:44:20.862026    9068 logs.go:123] Gathering logs for kubelet ...
	I0805 10:44:20.862037    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:44:20.894888    9068 logs.go:123] Gathering logs for kube-apiserver [ae2e3b5c46bc] ...
	I0805 10:44:20.894895    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2e3b5c46bc"
	I0805 10:44:20.909456    9068 logs.go:123] Gathering logs for etcd [207bbc832181] ...
	I0805 10:44:20.909466    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207bbc832181"
	I0805 10:44:20.924174    9068 logs.go:123] Gathering logs for coredns [e2daae6ade13] ...
	I0805 10:44:20.924185    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2daae6ade13"
	I0805 10:44:20.939657    9068 logs.go:123] Gathering logs for Docker ...
	I0805 10:44:20.939667    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:44:20.963202    9068 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:44:20.963215    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:44:23.498058    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:44:24.945037    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:44:24.945161    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:44:24.957929    9085 logs.go:276] 1 containers: [a10e618b4b87]
	I0805 10:44:24.958013    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:44:24.968637    9085 logs.go:276] 1 containers: [dd283aa612f7]
	I0805 10:44:24.968707    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:44:24.979269    9085 logs.go:276] 2 containers: [09cf1cd1eb79 3c0b270bfc85]
	I0805 10:44:24.979331    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:44:24.989994    9085 logs.go:276] 1 containers: [4cd2114f032c]
	I0805 10:44:24.990061    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:44:25.000606    9085 logs.go:276] 1 containers: [0b4747b7c71b]
	I0805 10:44:25.000675    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:44:25.010992    9085 logs.go:276] 1 containers: [6b61a4d7e65e]
	I0805 10:44:25.011064    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:44:25.022028    9085 logs.go:276] 0 containers: []
	W0805 10:44:25.022038    9085 logs.go:278] No container was found matching "kindnet"
	I0805 10:44:25.022095    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:44:25.032639    9085 logs.go:276] 1 containers: [5ffcf9115c5b]
	I0805 10:44:25.032658    9085 logs.go:123] Gathering logs for dmesg ...
	I0805 10:44:25.032664    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:44:25.037696    9085 logs.go:123] Gathering logs for etcd [dd283aa612f7] ...
	I0805 10:44:25.037703    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd283aa612f7"
	I0805 10:44:25.051805    9085 logs.go:123] Gathering logs for coredns [3c0b270bfc85] ...
	I0805 10:44:25.051818    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c0b270bfc85"
	I0805 10:44:25.062930    9085 logs.go:123] Gathering logs for kube-scheduler [4cd2114f032c] ...
	I0805 10:44:25.062944    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4cd2114f032c"
	I0805 10:44:25.077937    9085 logs.go:123] Gathering logs for storage-provisioner [5ffcf9115c5b] ...
	I0805 10:44:25.077948    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ffcf9115c5b"
	I0805 10:44:25.090176    9085 logs.go:123] Gathering logs for Docker ...
	I0805 10:44:25.090187    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:44:25.113419    9085 logs.go:123] Gathering logs for kubelet ...
	I0805 10:44:25.113429    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:44:25.150278    9085 logs.go:123] Gathering logs for kube-apiserver [a10e618b4b87] ...
	I0805 10:44:25.150285    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a10e618b4b87"
	I0805 10:44:25.164113    9085 logs.go:123] Gathering logs for coredns [09cf1cd1eb79] ...
	I0805 10:44:25.164124    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09cf1cd1eb79"
	I0805 10:44:25.175982    9085 logs.go:123] Gathering logs for kube-proxy [0b4747b7c71b] ...
	I0805 10:44:25.175993    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b4747b7c71b"
	I0805 10:44:25.187085    9085 logs.go:123] Gathering logs for kube-controller-manager [6b61a4d7e65e] ...
	I0805 10:44:25.187096    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b61a4d7e65e"
	I0805 10:44:25.208523    9085 logs.go:123] Gathering logs for container status ...
	I0805 10:44:25.208534    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:44:25.220452    9085 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:44:25.220465    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
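
Editor's note: the "container status" command is a small shell trick: the backtick substitution `which crictl || echo crictl` expands to crictl's full path if it is installed and to the bare word `crictl` otherwise, and the outer `|| sudo docker ps -a` falls back to Docker whenever the crictl invocation fails for either reason. The same fallback logic without command substitution, as a sketch:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // containerStatus prefers crictl and falls back to docker, mirroring
    // `sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a`.
    func containerStatus() (string, error) {
    	if path, err := exec.LookPath("crictl"); err == nil {
    		if out, err := exec.Command("sudo", path, "ps", "-a").CombinedOutput(); err == nil {
    			return string(out), nil
    		}
    	}
    	out, err := exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
    	return string(out), err
    }

    func main() {
    	out, err := containerStatus()
    	if err != nil {
    		fmt.Println("error:", err)
    		return
    	}
    	fmt.Print(out)
    }
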
	I0805 10:44:28.499833    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:44:28.500058    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:44:28.525945    9068 logs.go:276] 1 containers: [ae2e3b5c46bc]
	I0805 10:44:28.526064    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:44:28.546739    9068 logs.go:276] 1 containers: [207bbc832181]
	I0805 10:44:28.546826    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:44:28.559706    9068 logs.go:276] 2 containers: [e2daae6ade13 1a7c8223b623]
	I0805 10:44:28.559778    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:44:28.570816    9068 logs.go:276] 1 containers: [0db98522e2bf]
	I0805 10:44:28.570886    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:44:28.581469    9068 logs.go:276] 1 containers: [acf7c60c32f0]
	I0805 10:44:28.581546    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:44:28.591605    9068 logs.go:276] 1 containers: [6e0936bd3eb7]
	I0805 10:44:28.591667    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:44:28.601196    9068 logs.go:276] 0 containers: []
	W0805 10:44:28.601208    9068 logs.go:278] No container was found matching "kindnet"
	I0805 10:44:28.601257    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:44:28.612434    9068 logs.go:276] 1 containers: [385293316d74]
	I0805 10:44:28.612449    9068 logs.go:123] Gathering logs for coredns [e2daae6ade13] ...
	I0805 10:44:28.612456    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2daae6ade13"
	I0805 10:44:28.623554    9068 logs.go:123] Gathering logs for coredns [1a7c8223b623] ...
	I0805 10:44:28.623565    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a7c8223b623"
	I0805 10:44:28.634910    9068 logs.go:123] Gathering logs for kube-scheduler [0db98522e2bf] ...
	I0805 10:44:28.634921    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0db98522e2bf"
	I0805 10:44:28.655126    9068 logs.go:123] Gathering logs for kube-proxy [acf7c60c32f0] ...
	I0805 10:44:28.655137    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf7c60c32f0"
	I0805 10:44:28.666585    9068 logs.go:123] Gathering logs for storage-provisioner [385293316d74] ...
	I0805 10:44:28.666596    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385293316d74"
	I0805 10:44:28.677540    9068 logs.go:123] Gathering logs for Docker ...
	I0805 10:44:28.677551    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:44:28.700783    9068 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:44:28.700790    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:44:28.738541    9068 logs.go:123] Gathering logs for kube-apiserver [ae2e3b5c46bc] ...
	I0805 10:44:28.738552    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2e3b5c46bc"
	I0805 10:44:28.752904    9068 logs.go:123] Gathering logs for container status ...
	I0805 10:44:28.752918    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:44:28.764544    9068 logs.go:123] Gathering logs for etcd [207bbc832181] ...
	I0805 10:44:28.764556    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207bbc832181"
	I0805 10:44:28.785748    9068 logs.go:123] Gathering logs for kube-controller-manager [6e0936bd3eb7] ...
	I0805 10:44:28.785763    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e0936bd3eb7"
	I0805 10:44:28.811246    9068 logs.go:123] Gathering logs for kubelet ...
	I0805 10:44:28.811256    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:44:28.846731    9068 logs.go:123] Gathering logs for dmesg ...
	I0805 10:44:28.846741    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:44:27.760097    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:44:31.352871    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:44:32.762232    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:44:32.762461    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:44:32.784011    9085 logs.go:276] 1 containers: [a10e618b4b87]
	I0805 10:44:32.784124    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:44:32.799563    9085 logs.go:276] 1 containers: [dd283aa612f7]
	I0805 10:44:32.799638    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:44:32.812424    9085 logs.go:276] 2 containers: [09cf1cd1eb79 3c0b270bfc85]
	I0805 10:44:32.812500    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:44:32.822652    9085 logs.go:276] 1 containers: [4cd2114f032c]
	I0805 10:44:32.822730    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:44:32.833273    9085 logs.go:276] 1 containers: [0b4747b7c71b]
	I0805 10:44:32.833340    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:44:32.843788    9085 logs.go:276] 1 containers: [6b61a4d7e65e]
	I0805 10:44:32.843858    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:44:32.853318    9085 logs.go:276] 0 containers: []
	W0805 10:44:32.853328    9085 logs.go:278] No container was found matching "kindnet"
	I0805 10:44:32.853379    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:44:32.863624    9085 logs.go:276] 1 containers: [5ffcf9115c5b]
	I0805 10:44:32.863639    9085 logs.go:123] Gathering logs for etcd [dd283aa612f7] ...
	I0805 10:44:32.863645    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd283aa612f7"
	I0805 10:44:32.878988    9085 logs.go:123] Gathering logs for kube-controller-manager [6b61a4d7e65e] ...
	I0805 10:44:32.879000    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b61a4d7e65e"
	I0805 10:44:32.896280    9085 logs.go:123] Gathering logs for Docker ...
	I0805 10:44:32.896291    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:44:32.921728    9085 logs.go:123] Gathering logs for kubelet ...
	I0805 10:44:32.921738    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:44:32.960587    9085 logs.go:123] Gathering logs for dmesg ...
	I0805 10:44:32.960602    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:44:32.965310    9085 logs.go:123] Gathering logs for coredns [09cf1cd1eb79] ...
	I0805 10:44:32.965318    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09cf1cd1eb79"
	I0805 10:44:32.981325    9085 logs.go:123] Gathering logs for coredns [3c0b270bfc85] ...
	I0805 10:44:32.981337    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c0b270bfc85"
	I0805 10:44:32.993396    9085 logs.go:123] Gathering logs for kube-scheduler [4cd2114f032c] ...
	I0805 10:44:32.993406    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4cd2114f032c"
	I0805 10:44:33.007880    9085 logs.go:123] Gathering logs for kube-proxy [0b4747b7c71b] ...
	I0805 10:44:33.007890    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b4747b7c71b"
	I0805 10:44:33.019470    9085 logs.go:123] Gathering logs for storage-provisioner [5ffcf9115c5b] ...
	I0805 10:44:33.019484    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ffcf9115c5b"
	I0805 10:44:33.030997    9085 logs.go:123] Gathering logs for container status ...
	I0805 10:44:33.031012    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:44:33.042600    9085 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:44:33.042610    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:44:33.086117    9085 logs.go:123] Gathering logs for kube-apiserver [a10e618b4b87] ...
	I0805 10:44:33.086131    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a10e618b4b87"
	I0805 10:44:35.602731    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:44:36.353977    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:44:36.354124    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:44:36.371505    9068 logs.go:276] 1 containers: [ae2e3b5c46bc]
	I0805 10:44:36.371587    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:44:36.385212    9068 logs.go:276] 1 containers: [207bbc832181]
	I0805 10:44:36.385280    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:44:36.396468    9068 logs.go:276] 2 containers: [e2daae6ade13 1a7c8223b623]
	I0805 10:44:36.396542    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:44:36.407423    9068 logs.go:276] 1 containers: [0db98522e2bf]
	I0805 10:44:36.407491    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:44:36.422234    9068 logs.go:276] 1 containers: [acf7c60c32f0]
	I0805 10:44:36.422305    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:44:36.432715    9068 logs.go:276] 1 containers: [6e0936bd3eb7]
	I0805 10:44:36.432786    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:44:36.443732    9068 logs.go:276] 0 containers: []
	W0805 10:44:36.443748    9068 logs.go:278] No container was found matching "kindnet"
	I0805 10:44:36.443813    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:44:36.454393    9068 logs.go:276] 1 containers: [385293316d74]
	I0805 10:44:36.454410    9068 logs.go:123] Gathering logs for etcd [207bbc832181] ...
	I0805 10:44:36.454417    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207bbc832181"
	I0805 10:44:36.468999    9068 logs.go:123] Gathering logs for coredns [1a7c8223b623] ...
	I0805 10:44:36.469010    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a7c8223b623"
	I0805 10:44:36.481045    9068 logs.go:123] Gathering logs for kube-scheduler [0db98522e2bf] ...
	I0805 10:44:36.481056    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0db98522e2bf"
	I0805 10:44:36.499016    9068 logs.go:123] Gathering logs for container status ...
	I0805 10:44:36.499027    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:44:36.510189    9068 logs.go:123] Gathering logs for kubelet ...
	I0805 10:44:36.510204    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:44:36.542716    9068 logs.go:123] Gathering logs for dmesg ...
	I0805 10:44:36.542726    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:44:36.547845    9068 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:44:36.547855    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:44:36.582659    9068 logs.go:123] Gathering logs for kube-apiserver [ae2e3b5c46bc] ...
	I0805 10:44:36.582670    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2e3b5c46bc"
	I0805 10:44:36.597342    9068 logs.go:123] Gathering logs for Docker ...
	I0805 10:44:36.597355    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:44:36.620410    9068 logs.go:123] Gathering logs for coredns [e2daae6ade13] ...
	I0805 10:44:36.620418    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2daae6ade13"
	I0805 10:44:36.632298    9068 logs.go:123] Gathering logs for kube-proxy [acf7c60c32f0] ...
	I0805 10:44:36.632308    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf7c60c32f0"
	I0805 10:44:36.643447    9068 logs.go:123] Gathering logs for kube-controller-manager [6e0936bd3eb7] ...
	I0805 10:44:36.643458    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e0936bd3eb7"
	I0805 10:44:36.661003    9068 logs.go:123] Gathering logs for storage-provisioner [385293316d74] ...
	I0805 10:44:36.661015    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385293316d74"
	I0805 10:44:39.174981    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:44:40.603126    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:44:40.603556    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:44:40.647143    9085 logs.go:276] 1 containers: [a10e618b4b87]
	I0805 10:44:40.647276    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:44:40.667935    9085 logs.go:276] 1 containers: [dd283aa612f7]
	I0805 10:44:40.668050    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:44:40.683077    9085 logs.go:276] 2 containers: [09cf1cd1eb79 3c0b270bfc85]
	I0805 10:44:40.683157    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:44:40.695188    9085 logs.go:276] 1 containers: [4cd2114f032c]
	I0805 10:44:40.695260    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:44:40.706225    9085 logs.go:276] 1 containers: [0b4747b7c71b]
	I0805 10:44:40.706294    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:44:40.717064    9085 logs.go:276] 1 containers: [6b61a4d7e65e]
	I0805 10:44:40.717132    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:44:40.727305    9085 logs.go:276] 0 containers: []
	W0805 10:44:40.727316    9085 logs.go:278] No container was found matching "kindnet"
	I0805 10:44:40.727375    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:44:40.738220    9085 logs.go:276] 1 containers: [5ffcf9115c5b]
	I0805 10:44:40.738235    9085 logs.go:123] Gathering logs for container status ...
	I0805 10:44:40.738242    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:44:40.749960    9085 logs.go:123] Gathering logs for kubelet ...
	I0805 10:44:40.749971    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:44:40.788623    9085 logs.go:123] Gathering logs for dmesg ...
	I0805 10:44:40.788632    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:44:40.793629    9085 logs.go:123] Gathering logs for coredns [09cf1cd1eb79] ...
	I0805 10:44:40.793636    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09cf1cd1eb79"
	I0805 10:44:40.805658    9085 logs.go:123] Gathering logs for coredns [3c0b270bfc85] ...
	I0805 10:44:40.805671    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c0b270bfc85"
	I0805 10:44:40.817173    9085 logs.go:123] Gathering logs for kube-proxy [0b4747b7c71b] ...
	I0805 10:44:40.817184    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b4747b7c71b"
	I0805 10:44:40.829647    9085 logs.go:123] Gathering logs for kube-controller-manager [6b61a4d7e65e] ...
	I0805 10:44:40.829659    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b61a4d7e65e"
	I0805 10:44:40.850035    9085 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:44:40.850045    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:44:40.884947    9085 logs.go:123] Gathering logs for kube-apiserver [a10e618b4b87] ...
	I0805 10:44:40.884958    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a10e618b4b87"
	I0805 10:44:40.900101    9085 logs.go:123] Gathering logs for etcd [dd283aa612f7] ...
	I0805 10:44:40.900111    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd283aa612f7"
	I0805 10:44:40.913975    9085 logs.go:123] Gathering logs for kube-scheduler [4cd2114f032c] ...
	I0805 10:44:40.913986    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4cd2114f032c"
	I0805 10:44:40.928751    9085 logs.go:123] Gathering logs for storage-provisioner [5ffcf9115c5b] ...
	I0805 10:44:40.928761    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ffcf9115c5b"
	I0805 10:44:40.940595    9085 logs.go:123] Gathering logs for Docker ...
	I0805 10:44:40.940604    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
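	The interleaved entries above come from two test processes (PIDs 9068 and 9085) polling the same guest, which is why the timestamps occasionally step backwards. Each cycle has the same shape: a GET against https://10.0.2.15:8443/healthz fails with a net/http client timeout (the "Client.Timeout exceeded while awaiting headers" text is exactly what Go's http.Client reports), and the process falls back to enumerating containers and harvesting component logs before retrying. A minimal sketch of that probe loop, assuming illustrative timeout and cadence values (the real ones are not shown in the log) and not minikube's actual implementation:

// healthz probe sketch; names and values are illustrative, not minikube's code.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// probeHealthz issues one GET with a short client timeout; when the apiserver
// never answers, the error carries the "Client.Timeout exceeded while
// awaiting headers" text seen throughout the log above.
func probeHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // assumption: the real timeout is not in the log
		Transport: &http.Transport{
			// The guest apiserver serves a self-signed certificate, so a
			// standalone probe has to skip verification (illustrative only).
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return fmt.Errorf("stopped: %s: %w", url, err)
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d", resp.StatusCode)
	}
	return nil
}

func main() {
	deadline := time.Now().Add(4 * time.Minute) // assumption: overall retry budget
	for time.Now().Before(deadline) {
		if err := probeHealthz("https://10.0.2.15:8443/healthz"); err != nil {
			fmt.Println(err)
			time.Sleep(4 * time.Second) // roughly the cadence visible above
			continue
		}
		fmt.Println("apiserver healthy")
		return
	}
	fmt.Println("gave up waiting for apiserver")
}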
	I0805 10:44:44.177245    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:44:44.177539    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:44:44.208191    9068 logs.go:276] 1 containers: [ae2e3b5c46bc]
	I0805 10:44:44.208315    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:44:44.226425    9068 logs.go:276] 1 containers: [207bbc832181]
	I0805 10:44:44.226519    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:44:44.240536    9068 logs.go:276] 3 containers: [c00ea726514c e2daae6ade13 1a7c8223b623]
	I0805 10:44:44.240616    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:44:44.252476    9068 logs.go:276] 1 containers: [0db98522e2bf]
	I0805 10:44:44.252548    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:44:44.263520    9068 logs.go:276] 1 containers: [acf7c60c32f0]
	I0805 10:44:44.263589    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:44:44.274223    9068 logs.go:276] 1 containers: [6e0936bd3eb7]
	I0805 10:44:44.274287    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:44:44.284736    9068 logs.go:276] 0 containers: []
	W0805 10:44:44.284749    9068 logs.go:278] No container was found matching "kindnet"
	I0805 10:44:44.284808    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:44:44.295405    9068 logs.go:276] 1 containers: [385293316d74]
	I0805 10:44:44.295424    9068 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:44:44.295431    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:44:44.337526    9068 logs.go:123] Gathering logs for kube-scheduler [0db98522e2bf] ...
	I0805 10:44:44.337539    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0db98522e2bf"
	I0805 10:44:44.359831    9068 logs.go:123] Gathering logs for Docker ...
	I0805 10:44:44.359845    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:44:44.383376    9068 logs.go:123] Gathering logs for kubelet ...
	I0805 10:44:44.383386    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:44:44.415138    9068 logs.go:123] Gathering logs for coredns [c00ea726514c] ...
	I0805 10:44:44.415145    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c00ea726514c"
	I0805 10:44:44.426393    9068 logs.go:123] Gathering logs for kube-controller-manager [6e0936bd3eb7] ...
	I0805 10:44:44.426406    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e0936bd3eb7"
	I0805 10:44:44.443357    9068 logs.go:123] Gathering logs for dmesg ...
	I0805 10:44:44.443368    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:44:44.447914    9068 logs.go:123] Gathering logs for etcd [207bbc832181] ...
	I0805 10:44:44.447920    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207bbc832181"
	I0805 10:44:44.462239    9068 logs.go:123] Gathering logs for kube-proxy [acf7c60c32f0] ...
	I0805 10:44:44.462250    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf7c60c32f0"
	I0805 10:44:44.473941    9068 logs.go:123] Gathering logs for kube-apiserver [ae2e3b5c46bc] ...
	I0805 10:44:44.473952    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2e3b5c46bc"
	I0805 10:44:44.497300    9068 logs.go:123] Gathering logs for coredns [e2daae6ade13] ...
	I0805 10:44:44.497311    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2daae6ade13"
	I0805 10:44:44.509287    9068 logs.go:123] Gathering logs for coredns [1a7c8223b623] ...
	I0805 10:44:44.509298    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a7c8223b623"
	I0805 10:44:44.521072    9068 logs.go:123] Gathering logs for storage-provisioner [385293316d74] ...
	I0805 10:44:44.521084    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385293316d74"
	I0805 10:44:44.532313    9068 logs.go:123] Gathering logs for container status ...
	I0805 10:44:44.532324    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
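	Each enumeration pass above issues one docker ps -a --filter=name=k8s_<component> --format={{.ID}} per control-plane component: kubeadm-managed containers are named k8s_<component>_..., so a name filter is enough to find them, and an empty result (as for kindnet here) produces the "No container was found matching" warning. A self-contained sketch of that discovery step, with hypothetical helper names:

// container discovery sketch; containerIDs is a hypothetical helper.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists the IDs of all containers, running or exited, whose
// name matches the kubeadm convention k8s_<component>_..., mirroring the
// repeated `docker ps -a --filter=name=... --format={{.ID}}` calls above.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil // one ID per output line
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
	for _, c := range components {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		fmt.Printf("%d containers for %q: %v\n", len(ids), c, ids)
	}
}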
	I0805 10:44:43.465934    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:44:47.044948    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:44:48.468285    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:44:48.468457    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:44:48.491092    9085 logs.go:276] 1 containers: [a10e618b4b87]
	I0805 10:44:48.491175    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:44:48.506912    9085 logs.go:276] 1 containers: [dd283aa612f7]
	I0805 10:44:48.506987    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:44:48.526535    9085 logs.go:276] 2 containers: [09cf1cd1eb79 3c0b270bfc85]
	I0805 10:44:48.526597    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:44:48.537314    9085 logs.go:276] 1 containers: [4cd2114f032c]
	I0805 10:44:48.537375    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:44:48.547738    9085 logs.go:276] 1 containers: [0b4747b7c71b]
	I0805 10:44:48.547801    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:44:48.558084    9085 logs.go:276] 1 containers: [6b61a4d7e65e]
	I0805 10:44:48.558142    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:44:48.568143    9085 logs.go:276] 0 containers: []
	W0805 10:44:48.568157    9085 logs.go:278] No container was found matching "kindnet"
	I0805 10:44:48.568210    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:44:48.578791    9085 logs.go:276] 1 containers: [5ffcf9115c5b]
	I0805 10:44:48.578806    9085 logs.go:123] Gathering logs for kube-scheduler [4cd2114f032c] ...
	I0805 10:44:48.578812    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4cd2114f032c"
	I0805 10:44:48.593919    9085 logs.go:123] Gathering logs for kube-proxy [0b4747b7c71b] ...
	I0805 10:44:48.593929    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b4747b7c71b"
	I0805 10:44:48.605780    9085 logs.go:123] Gathering logs for Docker ...
	I0805 10:44:48.605790    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:44:48.629952    9085 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:44:48.629961    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:44:48.664732    9085 logs.go:123] Gathering logs for coredns [3c0b270bfc85] ...
	I0805 10:44:48.664743    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c0b270bfc85"
	I0805 10:44:48.682952    9085 logs.go:123] Gathering logs for kube-apiserver [a10e618b4b87] ...
	I0805 10:44:48.682963    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a10e618b4b87"
	I0805 10:44:48.697038    9085 logs.go:123] Gathering logs for etcd [dd283aa612f7] ...
	I0805 10:44:48.697048    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd283aa612f7"
	I0805 10:44:48.711625    9085 logs.go:123] Gathering logs for coredns [09cf1cd1eb79] ...
	I0805 10:44:48.711636    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09cf1cd1eb79"
	I0805 10:44:48.723709    9085 logs.go:123] Gathering logs for kube-controller-manager [6b61a4d7e65e] ...
	I0805 10:44:48.723718    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b61a4d7e65e"
	I0805 10:44:48.740664    9085 logs.go:123] Gathering logs for storage-provisioner [5ffcf9115c5b] ...
	I0805 10:44:48.740675    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ffcf9115c5b"
	I0805 10:44:48.752078    9085 logs.go:123] Gathering logs for container status ...
	I0805 10:44:48.752089    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:44:48.763189    9085 logs.go:123] Gathering logs for kubelet ...
	I0805 10:44:48.763199    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:44:48.800104    9085 logs.go:123] Gathering logs for dmesg ...
	I0805 10:44:48.800114    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:44:51.307048    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:44:52.047271    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:44:52.047383    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:44:52.058796    9068 logs.go:276] 1 containers: [ae2e3b5c46bc]
	I0805 10:44:52.058865    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:44:52.069232    9068 logs.go:276] 1 containers: [207bbc832181]
	I0805 10:44:52.069302    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:44:52.080207    9068 logs.go:276] 3 containers: [c00ea726514c e2daae6ade13 1a7c8223b623]
	I0805 10:44:52.080277    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:44:52.090916    9068 logs.go:276] 1 containers: [0db98522e2bf]
	I0805 10:44:52.090988    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:44:52.101533    9068 logs.go:276] 1 containers: [acf7c60c32f0]
	I0805 10:44:52.101599    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:44:52.112186    9068 logs.go:276] 1 containers: [6e0936bd3eb7]
	I0805 10:44:52.112256    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:44:52.123217    9068 logs.go:276] 0 containers: []
	W0805 10:44:52.123228    9068 logs.go:278] No container was found matching "kindnet"
	I0805 10:44:52.123285    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:44:52.138126    9068 logs.go:276] 1 containers: [385293316d74]
	I0805 10:44:52.138142    9068 logs.go:123] Gathering logs for kubelet ...
	I0805 10:44:52.138148    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:44:52.170651    9068 logs.go:123] Gathering logs for coredns [c00ea726514c] ...
	I0805 10:44:52.170659    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c00ea726514c"
	I0805 10:44:52.181632    9068 logs.go:123] Gathering logs for storage-provisioner [385293316d74] ...
	I0805 10:44:52.181644    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385293316d74"
	I0805 10:44:52.193107    9068 logs.go:123] Gathering logs for etcd [207bbc832181] ...
	I0805 10:44:52.193119    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207bbc832181"
	I0805 10:44:52.213846    9068 logs.go:123] Gathering logs for kube-proxy [acf7c60c32f0] ...
	I0805 10:44:52.213859    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf7c60c32f0"
	I0805 10:44:52.225404    9068 logs.go:123] Gathering logs for kube-controller-manager [6e0936bd3eb7] ...
	I0805 10:44:52.225418    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e0936bd3eb7"
	I0805 10:44:52.244227    9068 logs.go:123] Gathering logs for Docker ...
	I0805 10:44:52.244238    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:44:52.269277    9068 logs.go:123] Gathering logs for kube-apiserver [ae2e3b5c46bc] ...
	I0805 10:44:52.269285    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2e3b5c46bc"
	I0805 10:44:52.288509    9068 logs.go:123] Gathering logs for dmesg ...
	I0805 10:44:52.288519    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:44:52.293302    9068 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:44:52.293309    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:44:52.327262    9068 logs.go:123] Gathering logs for coredns [e2daae6ade13] ...
	I0805 10:44:52.327277    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2daae6ade13"
	I0805 10:44:52.342315    9068 logs.go:123] Gathering logs for coredns [1a7c8223b623] ...
	I0805 10:44:52.342325    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a7c8223b623"
	I0805 10:44:52.354195    9068 logs.go:123] Gathering logs for kube-scheduler [0db98522e2bf] ...
	I0805 10:44:52.354205    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0db98522e2bf"
	I0805 10:44:52.368437    9068 logs.go:123] Gathering logs for container status ...
	I0805 10:44:52.368449    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:44:54.881438    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:44:56.309209    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:44:56.309462    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:44:56.327106    9085 logs.go:276] 1 containers: [a10e618b4b87]
	I0805 10:44:56.327191    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:44:56.340905    9085 logs.go:276] 1 containers: [dd283aa612f7]
	I0805 10:44:56.340982    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:44:56.353277    9085 logs.go:276] 3 containers: [911b32609175 09cf1cd1eb79 3c0b270bfc85]
	I0805 10:44:56.353344    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:44:56.365926    9085 logs.go:276] 1 containers: [4cd2114f032c]
	I0805 10:44:56.366001    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:44:56.380019    9085 logs.go:276] 1 containers: [0b4747b7c71b]
	I0805 10:44:56.380088    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:44:56.390237    9085 logs.go:276] 1 containers: [6b61a4d7e65e]
	I0805 10:44:56.390301    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:44:56.400169    9085 logs.go:276] 0 containers: []
	W0805 10:44:56.400180    9085 logs.go:278] No container was found matching "kindnet"
	I0805 10:44:56.400238    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:44:56.418675    9085 logs.go:276] 1 containers: [5ffcf9115c5b]
	I0805 10:44:56.418693    9085 logs.go:123] Gathering logs for coredns [3c0b270bfc85] ...
	I0805 10:44:56.418698    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c0b270bfc85"
	I0805 10:44:56.430743    9085 logs.go:123] Gathering logs for Docker ...
	I0805 10:44:56.430754    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:44:56.456054    9085 logs.go:123] Gathering logs for dmesg ...
	I0805 10:44:56.456061    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:44:56.460752    9085 logs.go:123] Gathering logs for coredns [911b32609175] ...
	I0805 10:44:56.460761    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 911b32609175"
	I0805 10:44:56.471606    9085 logs.go:123] Gathering logs for container status ...
	I0805 10:44:56.471619    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:44:56.483355    9085 logs.go:123] Gathering logs for etcd [dd283aa612f7] ...
	I0805 10:44:56.483367    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd283aa612f7"
	I0805 10:44:56.497260    9085 logs.go:123] Gathering logs for kube-scheduler [4cd2114f032c] ...
	I0805 10:44:56.497271    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4cd2114f032c"
	I0805 10:44:56.511760    9085 logs.go:123] Gathering logs for kube-proxy [0b4747b7c71b] ...
	I0805 10:44:56.511772    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b4747b7c71b"
	I0805 10:44:56.524486    9085 logs.go:123] Gathering logs for kube-controller-manager [6b61a4d7e65e] ...
	I0805 10:44:56.524498    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b61a4d7e65e"
	I0805 10:44:56.543907    9085 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:44:56.543919    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:44:56.588380    9085 logs.go:123] Gathering logs for kube-apiserver [a10e618b4b87] ...
	I0805 10:44:56.588394    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a10e618b4b87"
	I0805 10:44:56.603819    9085 logs.go:123] Gathering logs for storage-provisioner [5ffcf9115c5b] ...
	I0805 10:44:56.603829    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ffcf9115c5b"
	I0805 10:44:56.615253    9085 logs.go:123] Gathering logs for kubelet ...
	I0805 10:44:56.615262    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:44:56.652078    9085 logs.go:123] Gathering logs for coredns [09cf1cd1eb79] ...
	I0805 10:44:56.652086    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09cf1cd1eb79"
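	The gathering pass itself is a fixed menu of shell commands run in the guest: journalctl for the kubelet and the docker/cri-docker units, a filtered dmesg (with util-linux dmesg, -P disables the pager, -H selects human-readable output, -L=never disables color, and --level restricts severities), docker logs --tail 400 per container, describe nodes via the guest's own kubectl binary and kubeconfig, and a container listing that prefers crictl but falls back to docker ps -a through the `which crictl || echo crictl` substitution and the trailing || chain. A sketch of that pass follows; the command strings are copied from the log, while the plumbing around them is illustrative:

// log-gathering sketch; command strings are from the log, the rest is illustrative.
package main

import (
	"fmt"
	"os/exec"
)

// gather runs one collection command the way the log's
// `Run: /bin/bash -c "..."` lines do. Inside the minikube guest sudo is
// assumed passwordless; on an ordinary host these commands would prompt or fail.
func gather(name, command string) {
	fmt.Printf("Gathering logs for %s ...\n", name)
	out, err := exec.Command("/bin/bash", "-c", command).CombinedOutput()
	if err != nil {
		fmt.Printf("%s: %v\n", name, err)
	}
	fmt.Print(string(out))
}

func main() {
	gather("kubelet", "sudo journalctl -u kubelet -n 400")
	gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
	gather("Docker", "sudo journalctl -u docker -u cri-docker -n 400")
	gather("etcd [dd283aa612f7]", "docker logs --tail 400 dd283aa612f7")
	gather("describe nodes", "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig")
	gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
}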
	I0805 10:44:59.881656    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:44:59.881902    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:44:59.908463    9068 logs.go:276] 1 containers: [ae2e3b5c46bc]
	I0805 10:44:59.908574    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:44:59.925213    9068 logs.go:276] 1 containers: [207bbc832181]
	I0805 10:44:59.925292    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:44:59.940476    9068 logs.go:276] 4 containers: [9e1f4b77dc16 c00ea726514c e2daae6ade13 1a7c8223b623]
	I0805 10:44:59.940564    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:44:59.951145    9068 logs.go:276] 1 containers: [0db98522e2bf]
	I0805 10:44:59.951214    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:44:59.965208    9068 logs.go:276] 1 containers: [acf7c60c32f0]
	I0805 10:44:59.965281    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:44:59.975550    9068 logs.go:276] 1 containers: [6e0936bd3eb7]
	I0805 10:44:59.975623    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:44:59.985954    9068 logs.go:276] 0 containers: []
	W0805 10:44:59.985966    9068 logs.go:278] No container was found matching "kindnet"
	I0805 10:44:59.986020    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:44:59.996217    9068 logs.go:276] 1 containers: [385293316d74]
	I0805 10:44:59.996235    9068 logs.go:123] Gathering logs for storage-provisioner [385293316d74] ...
	I0805 10:44:59.996241    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385293316d74"
	I0805 10:45:00.007878    9068 logs.go:123] Gathering logs for container status ...
	I0805 10:45:00.007893    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:45:00.020214    9068 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:45:00.020227    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:45:00.053968    9068 logs.go:123] Gathering logs for etcd [207bbc832181] ...
	I0805 10:45:00.053983    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207bbc832181"
	I0805 10:45:00.068089    9068 logs.go:123] Gathering logs for coredns [c00ea726514c] ...
	I0805 10:45:00.068102    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c00ea726514c"
	I0805 10:45:00.079330    9068 logs.go:123] Gathering logs for kube-apiserver [ae2e3b5c46bc] ...
	I0805 10:45:00.079342    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2e3b5c46bc"
	I0805 10:45:00.098345    9068 logs.go:123] Gathering logs for kube-proxy [acf7c60c32f0] ...
	I0805 10:45:00.098356    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf7c60c32f0"
	I0805 10:45:00.110045    9068 logs.go:123] Gathering logs for kube-scheduler [0db98522e2bf] ...
	I0805 10:45:00.110058    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0db98522e2bf"
	I0805 10:45:00.132552    9068 logs.go:123] Gathering logs for kube-controller-manager [6e0936bd3eb7] ...
	I0805 10:45:00.132564    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e0936bd3eb7"
	I0805 10:45:00.150056    9068 logs.go:123] Gathering logs for Docker ...
	I0805 10:45:00.150067    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:45:00.174573    9068 logs.go:123] Gathering logs for dmesg ...
	I0805 10:45:00.174583    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:45:00.178627    9068 logs.go:123] Gathering logs for coredns [e2daae6ade13] ...
	I0805 10:45:00.178635    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2daae6ade13"
	I0805 10:45:00.190549    9068 logs.go:123] Gathering logs for coredns [1a7c8223b623] ...
	I0805 10:45:00.190561    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a7c8223b623"
	I0805 10:45:00.202652    9068 logs.go:123] Gathering logs for kubelet ...
	I0805 10:45:00.202663    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:45:00.237509    9068 logs.go:123] Gathering logs for coredns [9e1f4b77dc16] ...
	I0805 10:45:00.237519    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e1f4b77dc16"
	I0805 10:44:59.165875    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:45:02.751237    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:45:04.166868    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:45:04.167138    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:45:04.199083    9085 logs.go:276] 1 containers: [a10e618b4b87]
	I0805 10:45:04.199213    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:45:04.218431    9085 logs.go:276] 1 containers: [dd283aa612f7]
	I0805 10:45:04.218527    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:45:04.233267    9085 logs.go:276] 3 containers: [911b32609175 09cf1cd1eb79 3c0b270bfc85]
	I0805 10:45:04.233348    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:45:04.245327    9085 logs.go:276] 1 containers: [4cd2114f032c]
	I0805 10:45:04.245399    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:45:04.259714    9085 logs.go:276] 1 containers: [0b4747b7c71b]
	I0805 10:45:04.259774    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:45:04.271854    9085 logs.go:276] 1 containers: [6b61a4d7e65e]
	I0805 10:45:04.271915    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:45:04.285797    9085 logs.go:276] 0 containers: []
	W0805 10:45:04.285811    9085 logs.go:278] No container was found matching "kindnet"
	I0805 10:45:04.285873    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:45:04.296291    9085 logs.go:276] 1 containers: [5ffcf9115c5b]
	I0805 10:45:04.296309    9085 logs.go:123] Gathering logs for coredns [911b32609175] ...
	I0805 10:45:04.296314    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 911b32609175"
	I0805 10:45:04.311175    9085 logs.go:123] Gathering logs for coredns [09cf1cd1eb79] ...
	I0805 10:45:04.311187    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09cf1cd1eb79"
	I0805 10:45:04.322636    9085 logs.go:123] Gathering logs for kube-scheduler [4cd2114f032c] ...
	I0805 10:45:04.322646    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4cd2114f032c"
	I0805 10:45:04.338266    9085 logs.go:123] Gathering logs for storage-provisioner [5ffcf9115c5b] ...
	I0805 10:45:04.338277    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ffcf9115c5b"
	I0805 10:45:04.354768    9085 logs.go:123] Gathering logs for container status ...
	I0805 10:45:04.354779    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:45:04.366189    9085 logs.go:123] Gathering logs for kube-apiserver [a10e618b4b87] ...
	I0805 10:45:04.366202    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a10e618b4b87"
	I0805 10:45:04.381198    9085 logs.go:123] Gathering logs for kube-proxy [0b4747b7c71b] ...
	I0805 10:45:04.381209    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b4747b7c71b"
	I0805 10:45:04.392650    9085 logs.go:123] Gathering logs for Docker ...
	I0805 10:45:04.392660    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:45:04.416913    9085 logs.go:123] Gathering logs for etcd [dd283aa612f7] ...
	I0805 10:45:04.416922    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd283aa612f7"
	I0805 10:45:04.431130    9085 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:45:04.431140    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:45:04.468381    9085 logs.go:123] Gathering logs for coredns [3c0b270bfc85] ...
	I0805 10:45:04.468396    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c0b270bfc85"
	I0805 10:45:04.481149    9085 logs.go:123] Gathering logs for kube-controller-manager [6b61a4d7e65e] ...
	I0805 10:45:04.481160    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b61a4d7e65e"
	I0805 10:45:04.498984    9085 logs.go:123] Gathering logs for kubelet ...
	I0805 10:45:04.498994    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:45:04.537752    9085 logs.go:123] Gathering logs for dmesg ...
	I0805 10:45:04.537761    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:45:07.044185    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:45:07.753762    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:45:07.753892    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:45:07.767401    9068 logs.go:276] 1 containers: [ae2e3b5c46bc]
	I0805 10:45:07.767482    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:45:07.778757    9068 logs.go:276] 1 containers: [207bbc832181]
	I0805 10:45:07.778821    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:45:07.793703    9068 logs.go:276] 4 containers: [9e1f4b77dc16 c00ea726514c e2daae6ade13 1a7c8223b623]
	I0805 10:45:07.793774    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:45:07.803771    9068 logs.go:276] 1 containers: [0db98522e2bf]
	I0805 10:45:07.803841    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:45:07.814609    9068 logs.go:276] 1 containers: [acf7c60c32f0]
	I0805 10:45:07.814675    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:45:07.825586    9068 logs.go:276] 1 containers: [6e0936bd3eb7]
	I0805 10:45:07.825649    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:45:07.836019    9068 logs.go:276] 0 containers: []
	W0805 10:45:07.836030    9068 logs.go:278] No container was found matching "kindnet"
	I0805 10:45:07.836083    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:45:07.851067    9068 logs.go:276] 1 containers: [385293316d74]
	I0805 10:45:07.851084    9068 logs.go:123] Gathering logs for kube-proxy [acf7c60c32f0] ...
	I0805 10:45:07.851090    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf7c60c32f0"
	I0805 10:45:07.876345    9068 logs.go:123] Gathering logs for dmesg ...
	I0805 10:45:07.876359    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:45:07.881617    9068 logs.go:123] Gathering logs for coredns [9e1f4b77dc16] ...
	I0805 10:45:07.881626    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e1f4b77dc16"
	I0805 10:45:07.893471    9068 logs.go:123] Gathering logs for coredns [e2daae6ade13] ...
	I0805 10:45:07.893485    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2daae6ade13"
	I0805 10:45:07.907633    9068 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:45:07.907644    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:45:07.942973    9068 logs.go:123] Gathering logs for coredns [1a7c8223b623] ...
	I0805 10:45:07.942986    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a7c8223b623"
	I0805 10:45:07.954445    9068 logs.go:123] Gathering logs for kube-controller-manager [6e0936bd3eb7] ...
	I0805 10:45:07.954456    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e0936bd3eb7"
	I0805 10:45:07.972193    9068 logs.go:123] Gathering logs for storage-provisioner [385293316d74] ...
	I0805 10:45:07.972208    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385293316d74"
	I0805 10:45:07.988911    9068 logs.go:123] Gathering logs for Docker ...
	I0805 10:45:07.988924    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:45:08.012456    9068 logs.go:123] Gathering logs for container status ...
	I0805 10:45:08.012464    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:45:08.030407    9068 logs.go:123] Gathering logs for kubelet ...
	I0805 10:45:08.030420    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:45:08.062597    9068 logs.go:123] Gathering logs for coredns [c00ea726514c] ...
	I0805 10:45:08.062605    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c00ea726514c"
	I0805 10:45:08.074372    9068 logs.go:123] Gathering logs for kube-scheduler [0db98522e2bf] ...
	I0805 10:45:08.074384    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0db98522e2bf"
	I0805 10:45:08.089704    9068 logs.go:123] Gathering logs for kube-apiserver [ae2e3b5c46bc] ...
	I0805 10:45:08.089718    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2e3b5c46bc"
	I0805 10:45:08.104327    9068 logs.go:123] Gathering logs for etcd [207bbc832181] ...
	I0805 10:45:08.104337    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207bbc832181"
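	Every Run: line above is executed inside the guest through minikube's SSH runner (the ssh_runner.go:195 call sites). Below is a rough stand-in using the system ssh client rather than minikube's embedded one; the user name, key path, and options are assumptions, and 10.0.2.15 is the guest-side address from the log (under QEMU user networking a host would normally reach the VM through a forwarded port instead):

// SSH-runner stand-in; user, key path, and ssh options are assumptions.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

// runOverSSH mimics the `ssh_runner.go:195] Run: <cmd>` pattern by shelling
// out to the system ssh client.
func runOverSSH(cmd string) (string, error) {
	home, err := os.UserHomeDir()
	if err != nil {
		return "", err
	}
	key := filepath.Join(home, ".minikube", "machines", "minikube", "id_rsa") // hypothetical path
	out, err := exec.Command("ssh",
		"-i", key,
		"-o", "StrictHostKeyChecking=no",
		"docker@10.0.2.15", // user is an assumption; the IP is from the log
		cmd).CombinedOutput()
	return string(out), err
}

func main() {
	out, err := runOverSSH("docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}")
	if err != nil {
		fmt.Println("ssh failed:", err)
	}
	fmt.Print(out)
}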
	I0805 10:45:12.046362    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:45:12.046546    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:45:12.081974    9085 logs.go:276] 1 containers: [a10e618b4b87]
	I0805 10:45:12.082060    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:45:12.094554    9085 logs.go:276] 1 containers: [dd283aa612f7]
	I0805 10:45:12.094631    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:45:12.105966    9085 logs.go:276] 4 containers: [d08309a6b024 911b32609175 09cf1cd1eb79 3c0b270bfc85]
	I0805 10:45:12.106043    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:45:12.116811    9085 logs.go:276] 1 containers: [4cd2114f032c]
	I0805 10:45:12.116885    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:45:12.127821    9085 logs.go:276] 1 containers: [0b4747b7c71b]
	I0805 10:45:12.127889    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:45:12.138493    9085 logs.go:276] 1 containers: [6b61a4d7e65e]
	I0805 10:45:12.138564    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:45:12.150426    9085 logs.go:276] 0 containers: []
	W0805 10:45:12.150437    9085 logs.go:278] No container was found matching "kindnet"
	I0805 10:45:12.150497    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:45:12.160826    9085 logs.go:276] 1 containers: [5ffcf9115c5b]
	I0805 10:45:12.160844    9085 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:45:12.160849    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:45:12.195946    9085 logs.go:123] Gathering logs for etcd [dd283aa612f7] ...
	I0805 10:45:12.195958    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd283aa612f7"
	I0805 10:45:12.215764    9085 logs.go:123] Gathering logs for Docker ...
	I0805 10:45:12.215776    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:45:12.240928    9085 logs.go:123] Gathering logs for dmesg ...
	I0805 10:45:12.240938    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:45:12.245659    9085 logs.go:123] Gathering logs for coredns [3c0b270bfc85] ...
	I0805 10:45:12.245669    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c0b270bfc85"
	I0805 10:45:12.257476    9085 logs.go:123] Gathering logs for storage-provisioner [5ffcf9115c5b] ...
	I0805 10:45:12.257486    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ffcf9115c5b"
	I0805 10:45:12.268925    9085 logs.go:123] Gathering logs for coredns [911b32609175] ...
	I0805 10:45:12.268936    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 911b32609175"
	I0805 10:45:12.280526    9085 logs.go:123] Gathering logs for kube-scheduler [4cd2114f032c] ...
	I0805 10:45:12.280538    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4cd2114f032c"
	I0805 10:45:12.295104    9085 logs.go:123] Gathering logs for kube-proxy [0b4747b7c71b] ...
	I0805 10:45:12.295114    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b4747b7c71b"
	I0805 10:45:12.306630    9085 logs.go:123] Gathering logs for kube-controller-manager [6b61a4d7e65e] ...
	I0805 10:45:12.306642    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b61a4d7e65e"
	I0805 10:45:12.324629    9085 logs.go:123] Gathering logs for kubelet ...
	I0805 10:45:12.324640    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:45:10.620123    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:45:12.360989    9085 logs.go:123] Gathering logs for kube-apiserver [a10e618b4b87] ...
	I0805 10:45:12.360999    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a10e618b4b87"
	I0805 10:45:12.375393    9085 logs.go:123] Gathering logs for coredns [d08309a6b024] ...
	I0805 10:45:12.375404    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08309a6b024"
	I0805 10:45:12.386736    9085 logs.go:123] Gathering logs for coredns [09cf1cd1eb79] ...
	I0805 10:45:12.386748    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09cf1cd1eb79"
	I0805 10:45:12.398147    9085 logs.go:123] Gathering logs for container status ...
	I0805 10:45:12.398157    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:45:14.911446    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:45:15.622442    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:45:15.622589    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:45:15.635800    9068 logs.go:276] 1 containers: [ae2e3b5c46bc]
	I0805 10:45:15.635873    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:45:15.646572    9068 logs.go:276] 1 containers: [207bbc832181]
	I0805 10:45:15.646635    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:45:15.657028    9068 logs.go:276] 4 containers: [9e1f4b77dc16 c00ea726514c e2daae6ade13 1a7c8223b623]
	I0805 10:45:15.657101    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:45:15.667909    9068 logs.go:276] 1 containers: [0db98522e2bf]
	I0805 10:45:15.667976    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:45:15.678324    9068 logs.go:276] 1 containers: [acf7c60c32f0]
	I0805 10:45:15.678395    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:45:15.689122    9068 logs.go:276] 1 containers: [6e0936bd3eb7]
	I0805 10:45:15.689192    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:45:15.699429    9068 logs.go:276] 0 containers: []
	W0805 10:45:15.699439    9068 logs.go:278] No container was found matching "kindnet"
	I0805 10:45:15.699492    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:45:15.710972    9068 logs.go:276] 1 containers: [385293316d74]
	I0805 10:45:15.710991    9068 logs.go:123] Gathering logs for dmesg ...
	I0805 10:45:15.710998    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:45:15.715291    9068 logs.go:123] Gathering logs for coredns [9e1f4b77dc16] ...
	I0805 10:45:15.715298    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e1f4b77dc16"
	I0805 10:45:15.733024    9068 logs.go:123] Gathering logs for coredns [1a7c8223b623] ...
	I0805 10:45:15.733036    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a7c8223b623"
	I0805 10:45:15.745038    9068 logs.go:123] Gathering logs for kube-scheduler [0db98522e2bf] ...
	I0805 10:45:15.745050    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0db98522e2bf"
	I0805 10:45:15.760967    9068 logs.go:123] Gathering logs for kube-controller-manager [6e0936bd3eb7] ...
	I0805 10:45:15.760978    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e0936bd3eb7"
	I0805 10:45:15.779317    9068 logs.go:123] Gathering logs for kube-apiserver [ae2e3b5c46bc] ...
	I0805 10:45:15.779331    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2e3b5c46bc"
	I0805 10:45:15.794025    9068 logs.go:123] Gathering logs for etcd [207bbc832181] ...
	I0805 10:45:15.794035    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207bbc832181"
	I0805 10:45:15.808046    9068 logs.go:123] Gathering logs for coredns [e2daae6ade13] ...
	I0805 10:45:15.808056    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2daae6ade13"
	I0805 10:45:15.821565    9068 logs.go:123] Gathering logs for Docker ...
	I0805 10:45:15.821581    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:45:15.844892    9068 logs.go:123] Gathering logs for kubelet ...
	I0805 10:45:15.844900    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:45:15.877504    9068 logs.go:123] Gathering logs for storage-provisioner [385293316d74] ...
	I0805 10:45:15.877512    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385293316d74"
	I0805 10:45:15.889447    9068 logs.go:123] Gathering logs for container status ...
	I0805 10:45:15.889462    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:45:15.901485    9068 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:45:15.901495    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:45:15.936162    9068 logs.go:123] Gathering logs for coredns [c00ea726514c] ...
	I0805 10:45:15.936173    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c00ea726514c"
	I0805 10:45:15.947815    9068 logs.go:123] Gathering logs for kube-proxy [acf7c60c32f0] ...
	I0805 10:45:15.947825    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf7c60c32f0"
	I0805 10:45:18.459874    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:45:19.913682    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:45:19.913913    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:45:19.931533    9085 logs.go:276] 1 containers: [a10e618b4b87]
	I0805 10:45:19.931625    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:45:19.945178    9085 logs.go:276] 1 containers: [dd283aa612f7]
	I0805 10:45:19.945259    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:45:19.958584    9085 logs.go:276] 4 containers: [d08309a6b024 911b32609175 09cf1cd1eb79 3c0b270bfc85]
	I0805 10:45:19.958662    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:45:19.969131    9085 logs.go:276] 1 containers: [4cd2114f032c]
	I0805 10:45:19.969192    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:45:19.979689    9085 logs.go:276] 1 containers: [0b4747b7c71b]
	I0805 10:45:19.979756    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:45:19.991427    9085 logs.go:276] 1 containers: [6b61a4d7e65e]
	I0805 10:45:19.991490    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:45:20.001764    9085 logs.go:276] 0 containers: []
	W0805 10:45:20.001776    9085 logs.go:278] No container was found matching "kindnet"
	I0805 10:45:20.001829    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:45:20.012482    9085 logs.go:276] 1 containers: [5ffcf9115c5b]
	I0805 10:45:20.012499    9085 logs.go:123] Gathering logs for dmesg ...
	I0805 10:45:20.012504    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:45:20.017409    9085 logs.go:123] Gathering logs for etcd [dd283aa612f7] ...
	I0805 10:45:20.017418    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd283aa612f7"
	I0805 10:45:20.049534    9085 logs.go:123] Gathering logs for kube-proxy [0b4747b7c71b] ...
	I0805 10:45:20.049547    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b4747b7c71b"
	I0805 10:45:20.061159    9085 logs.go:123] Gathering logs for kube-controller-manager [6b61a4d7e65e] ...
	I0805 10:45:20.061173    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b61a4d7e65e"
	I0805 10:45:20.079061    9085 logs.go:123] Gathering logs for storage-provisioner [5ffcf9115c5b] ...
	I0805 10:45:20.079073    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ffcf9115c5b"
	I0805 10:45:20.090379    9085 logs.go:123] Gathering logs for Docker ...
	I0805 10:45:20.090393    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:45:20.113901    9085 logs.go:123] Gathering logs for kubelet ...
	I0805 10:45:20.113911    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:45:20.152414    9085 logs.go:123] Gathering logs for coredns [d08309a6b024] ...
	I0805 10:45:20.152426    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08309a6b024"
	I0805 10:45:20.163696    9085 logs.go:123] Gathering logs for coredns [911b32609175] ...
	I0805 10:45:20.163706    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 911b32609175"
	I0805 10:45:20.175017    9085 logs.go:123] Gathering logs for coredns [3c0b270bfc85] ...
	I0805 10:45:20.175034    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c0b270bfc85"
	I0805 10:45:20.186859    9085 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:45:20.186869    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:45:20.222490    9085 logs.go:123] Gathering logs for container status ...
	I0805 10:45:20.222509    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:45:20.234696    9085 logs.go:123] Gathering logs for kube-apiserver [a10e618b4b87] ...
	I0805 10:45:20.234707    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a10e618b4b87"
	I0805 10:45:20.261001    9085 logs.go:123] Gathering logs for coredns [09cf1cd1eb79] ...
	I0805 10:45:20.261013    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09cf1cd1eb79"
	I0805 10:45:20.272514    9085 logs.go:123] Gathering logs for kube-scheduler [4cd2114f032c] ...
	I0805 10:45:20.272525    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4cd2114f032c"
	I0805 10:45:23.462185    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:45:23.462401    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:45:23.488932    9068 logs.go:276] 1 containers: [ae2e3b5c46bc]
	I0805 10:45:23.489064    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:45:23.507331    9068 logs.go:276] 1 containers: [207bbc832181]
	I0805 10:45:23.507434    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:45:23.521948    9068 logs.go:276] 4 containers: [9e1f4b77dc16 c00ea726514c e2daae6ade13 1a7c8223b623]
	I0805 10:45:23.522029    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:45:23.538578    9068 logs.go:276] 1 containers: [0db98522e2bf]
	I0805 10:45:23.538780    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:45:23.549768    9068 logs.go:276] 1 containers: [acf7c60c32f0]
	I0805 10:45:23.549837    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:45:23.560206    9068 logs.go:276] 1 containers: [6e0936bd3eb7]
	I0805 10:45:23.560266    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:45:23.574182    9068 logs.go:276] 0 containers: []
	W0805 10:45:23.574198    9068 logs.go:278] No container was found matching "kindnet"
	I0805 10:45:23.574261    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:45:23.585906    9068 logs.go:276] 1 containers: [385293316d74]
	I0805 10:45:23.585926    9068 logs.go:123] Gathering logs for etcd [207bbc832181] ...
	I0805 10:45:23.585932    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207bbc832181"
	I0805 10:45:23.599585    9068 logs.go:123] Gathering logs for coredns [9e1f4b77dc16] ...
	I0805 10:45:23.599596    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e1f4b77dc16"
	I0805 10:45:23.610942    9068 logs.go:123] Gathering logs for coredns [c00ea726514c] ...
	I0805 10:45:23.610954    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c00ea726514c"
	I0805 10:45:23.626876    9068 logs.go:123] Gathering logs for kube-scheduler [0db98522e2bf] ...
	I0805 10:45:23.626887    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0db98522e2bf"
	I0805 10:45:23.641147    9068 logs.go:123] Gathering logs for kube-controller-manager [6e0936bd3eb7] ...
	I0805 10:45:23.641158    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e0936bd3eb7"
	I0805 10:45:23.658511    9068 logs.go:123] Gathering logs for dmesg ...
	I0805 10:45:23.658523    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:45:23.662588    9068 logs.go:123] Gathering logs for kube-apiserver [ae2e3b5c46bc] ...
	I0805 10:45:23.662594    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2e3b5c46bc"
	I0805 10:45:23.677070    9068 logs.go:123] Gathering logs for coredns [e2daae6ade13] ...
	I0805 10:45:23.677081    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2daae6ade13"
	I0805 10:45:23.688826    9068 logs.go:123] Gathering logs for kube-proxy [acf7c60c32f0] ...
	I0805 10:45:23.688837    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf7c60c32f0"
	I0805 10:45:23.702083    9068 logs.go:123] Gathering logs for storage-provisioner [385293316d74] ...
	I0805 10:45:23.702094    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385293316d74"
	I0805 10:45:23.713764    9068 logs.go:123] Gathering logs for kubelet ...
	I0805 10:45:23.713780    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:45:23.746849    9068 logs.go:123] Gathering logs for Docker ...
	I0805 10:45:23.746858    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:45:23.770246    9068 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:45:23.770258    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:45:23.805988    9068 logs.go:123] Gathering logs for coredns [1a7c8223b623] ...
	I0805 10:45:23.806000    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a7c8223b623"
	I0805 10:45:23.820690    9068 logs.go:123] Gathering logs for container status ...
	I0805 10:45:23.820703    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:45:22.792036    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:45:26.334727    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:45:27.794348    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:45:27.794559    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:45:27.814601    9085 logs.go:276] 1 containers: [a10e618b4b87]
	I0805 10:45:27.814691    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:45:27.831367    9085 logs.go:276] 1 containers: [dd283aa612f7]
	I0805 10:45:27.831443    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:45:27.843018    9085 logs.go:276] 4 containers: [d08309a6b024 911b32609175 09cf1cd1eb79 3c0b270bfc85]
	I0805 10:45:27.843089    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:45:27.852958    9085 logs.go:276] 1 containers: [4cd2114f032c]
	I0805 10:45:27.853025    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:45:27.862990    9085 logs.go:276] 1 containers: [0b4747b7c71b]
	I0805 10:45:27.863062    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:45:27.873490    9085 logs.go:276] 1 containers: [6b61a4d7e65e]
	I0805 10:45:27.873550    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:45:27.883998    9085 logs.go:276] 0 containers: []
	W0805 10:45:27.884010    9085 logs.go:278] No container was found matching "kindnet"
	I0805 10:45:27.884066    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:45:27.894568    9085 logs.go:276] 1 containers: [5ffcf9115c5b]
	I0805 10:45:27.894584    9085 logs.go:123] Gathering logs for kubelet ...
	I0805 10:45:27.894591    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:45:27.931257    9085 logs.go:123] Gathering logs for kube-scheduler [4cd2114f032c] ...
	I0805 10:45:27.931267    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4cd2114f032c"
	I0805 10:45:27.951830    9085 logs.go:123] Gathering logs for container status ...
	I0805 10:45:27.951839    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:45:27.963972    9085 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:45:27.963984    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:45:28.001109    9085 logs.go:123] Gathering logs for coredns [911b32609175] ...
	I0805 10:45:28.001123    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 911b32609175"
	I0805 10:45:28.019923    9085 logs.go:123] Gathering logs for kube-proxy [0b4747b7c71b] ...
	I0805 10:45:28.019936    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b4747b7c71b"
	I0805 10:45:28.035515    9085 logs.go:123] Gathering logs for storage-provisioner [5ffcf9115c5b] ...
	I0805 10:45:28.035528    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ffcf9115c5b"
	I0805 10:45:28.047589    9085 logs.go:123] Gathering logs for kube-apiserver [a10e618b4b87] ...
	I0805 10:45:28.047600    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a10e618b4b87"
	I0805 10:45:28.062173    9085 logs.go:123] Gathering logs for coredns [d08309a6b024] ...
	I0805 10:45:28.062186    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08309a6b024"
	I0805 10:45:28.073849    9085 logs.go:123] Gathering logs for coredns [09cf1cd1eb79] ...
	I0805 10:45:28.073859    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09cf1cd1eb79"
	I0805 10:45:28.088108    9085 logs.go:123] Gathering logs for coredns [3c0b270bfc85] ...
	I0805 10:45:28.088119    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c0b270bfc85"
	I0805 10:45:28.099629    9085 logs.go:123] Gathering logs for dmesg ...
	I0805 10:45:28.099640    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:45:28.104590    9085 logs.go:123] Gathering logs for etcd [dd283aa612f7] ...
	I0805 10:45:28.104597    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd283aa612f7"
	I0805 10:45:28.118322    9085 logs.go:123] Gathering logs for kube-controller-manager [6b61a4d7e65e] ...
	I0805 10:45:28.118331    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b61a4d7e65e"
	I0805 10:45:28.136243    9085 logs.go:123] Gathering logs for Docker ...
	I0805 10:45:28.136254    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:45:30.663926    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:45:31.337031    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:45:31.337197    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:45:31.348358    9068 logs.go:276] 1 containers: [ae2e3b5c46bc]
	I0805 10:45:31.348429    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:45:31.359350    9068 logs.go:276] 1 containers: [207bbc832181]
	I0805 10:45:31.359428    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:45:31.371072    9068 logs.go:276] 4 containers: [9e1f4b77dc16 c00ea726514c e2daae6ade13 1a7c8223b623]
	I0805 10:45:31.371141    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:45:31.386493    9068 logs.go:276] 1 containers: [0db98522e2bf]
	I0805 10:45:31.386563    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:45:31.396873    9068 logs.go:276] 1 containers: [acf7c60c32f0]
	I0805 10:45:31.396938    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:45:31.407790    9068 logs.go:276] 1 containers: [6e0936bd3eb7]
	I0805 10:45:31.407849    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:45:31.425899    9068 logs.go:276] 0 containers: []
	W0805 10:45:31.425910    9068 logs.go:278] No container was found matching "kindnet"
	I0805 10:45:31.425972    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:45:31.436672    9068 logs.go:276] 1 containers: [385293316d74]
	I0805 10:45:31.436688    9068 logs.go:123] Gathering logs for coredns [e2daae6ade13] ...
	I0805 10:45:31.436693    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2daae6ade13"
	I0805 10:45:31.448688    9068 logs.go:123] Gathering logs for kube-scheduler [0db98522e2bf] ...
	I0805 10:45:31.448699    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0db98522e2bf"
	I0805 10:45:31.463282    9068 logs.go:123] Gathering logs for kube-proxy [acf7c60c32f0] ...
	I0805 10:45:31.463294    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf7c60c32f0"
	I0805 10:45:31.475939    9068 logs.go:123] Gathering logs for kubelet ...
	I0805 10:45:31.475953    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:45:31.511180    9068 logs.go:123] Gathering logs for dmesg ...
	I0805 10:45:31.511188    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:45:31.515605    9068 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:45:31.515613    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:45:31.554088    9068 logs.go:123] Gathering logs for storage-provisioner [385293316d74] ...
	I0805 10:45:31.554097    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385293316d74"
	I0805 10:45:31.566103    9068 logs.go:123] Gathering logs for container status ...
	I0805 10:45:31.566118    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:45:31.577609    9068 logs.go:123] Gathering logs for kube-apiserver [ae2e3b5c46bc] ...
	I0805 10:45:31.577624    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2e3b5c46bc"
	I0805 10:45:31.592557    9068 logs.go:123] Gathering logs for coredns [9e1f4b77dc16] ...
	I0805 10:45:31.592573    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e1f4b77dc16"
	I0805 10:45:31.607955    9068 logs.go:123] Gathering logs for coredns [1a7c8223b623] ...
	I0805 10:45:31.607969    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a7c8223b623"
	I0805 10:45:31.619497    9068 logs.go:123] Gathering logs for kube-controller-manager [6e0936bd3eb7] ...
	I0805 10:45:31.619512    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e0936bd3eb7"
	I0805 10:45:31.643332    9068 logs.go:123] Gathering logs for Docker ...
	I0805 10:45:31.643341    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:45:31.667650    9068 logs.go:123] Gathering logs for etcd [207bbc832181] ...
	I0805 10:45:31.667657    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207bbc832181"
	I0805 10:45:31.688041    9068 logs.go:123] Gathering logs for coredns [c00ea726514c] ...
	I0805 10:45:31.688055    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c00ea726514c"
	I0805 10:45:34.202058    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:45:35.666309    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:45:35.666606    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:45:35.694396    9085 logs.go:276] 1 containers: [a10e618b4b87]
	I0805 10:45:35.694504    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:45:35.713715    9085 logs.go:276] 1 containers: [dd283aa612f7]
	I0805 10:45:35.713795    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:45:35.727967    9085 logs.go:276] 4 containers: [d08309a6b024 911b32609175 09cf1cd1eb79 3c0b270bfc85]
	I0805 10:45:35.728048    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:45:35.740035    9085 logs.go:276] 1 containers: [4cd2114f032c]
	I0805 10:45:35.740103    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:45:35.750447    9085 logs.go:276] 1 containers: [0b4747b7c71b]
	I0805 10:45:35.750512    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:45:35.760912    9085 logs.go:276] 1 containers: [6b61a4d7e65e]
	I0805 10:45:35.760983    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:45:35.778196    9085 logs.go:276] 0 containers: []
	W0805 10:45:35.778207    9085 logs.go:278] No container was found matching "kindnet"
	I0805 10:45:35.778264    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:45:35.789147    9085 logs.go:276] 1 containers: [5ffcf9115c5b]
	I0805 10:45:35.789163    9085 logs.go:123] Gathering logs for dmesg ...
	I0805 10:45:35.789168    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:45:35.793749    9085 logs.go:123] Gathering logs for coredns [911b32609175] ...
	I0805 10:45:35.793756    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 911b32609175"
	I0805 10:45:35.805484    9085 logs.go:123] Gathering logs for kubelet ...
	I0805 10:45:35.805497    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:45:35.842542    9085 logs.go:123] Gathering logs for kube-scheduler [4cd2114f032c] ...
	I0805 10:45:35.842554    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4cd2114f032c"
	I0805 10:45:35.857538    9085 logs.go:123] Gathering logs for kube-proxy [0b4747b7c71b] ...
	I0805 10:45:35.857551    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b4747b7c71b"
	I0805 10:45:35.869513    9085 logs.go:123] Gathering logs for kube-controller-manager [6b61a4d7e65e] ...
	I0805 10:45:35.869524    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b61a4d7e65e"
	I0805 10:45:35.886968    9085 logs.go:123] Gathering logs for storage-provisioner [5ffcf9115c5b] ...
	I0805 10:45:35.886980    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ffcf9115c5b"
	I0805 10:45:35.897999    9085 logs.go:123] Gathering logs for Docker ...
	I0805 10:45:35.898009    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:45:35.921740    9085 logs.go:123] Gathering logs for container status ...
	I0805 10:45:35.921750    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:45:35.933372    9085 logs.go:123] Gathering logs for coredns [3c0b270bfc85] ...
	I0805 10:45:35.933385    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c0b270bfc85"
	I0805 10:45:35.945640    9085 logs.go:123] Gathering logs for etcd [dd283aa612f7] ...
	I0805 10:45:35.945651    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd283aa612f7"
	I0805 10:45:35.959866    9085 logs.go:123] Gathering logs for coredns [d08309a6b024] ...
	I0805 10:45:35.959876    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08309a6b024"
	I0805 10:45:35.971631    9085 logs.go:123] Gathering logs for kube-apiserver [a10e618b4b87] ...
	I0805 10:45:35.971644    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a10e618b4b87"
	I0805 10:45:35.986990    9085 logs.go:123] Gathering logs for coredns [09cf1cd1eb79] ...
	I0805 10:45:35.987002    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09cf1cd1eb79"
	I0805 10:45:36.001673    9085 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:45:36.001687    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:45:39.204447    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:45:39.204572    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:45:39.218601    9068 logs.go:276] 1 containers: [ae2e3b5c46bc]
	I0805 10:45:39.218679    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:45:39.229964    9068 logs.go:276] 1 containers: [207bbc832181]
	I0805 10:45:39.230043    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:45:39.240313    9068 logs.go:276] 4 containers: [9e1f4b77dc16 c00ea726514c e2daae6ade13 1a7c8223b623]
	I0805 10:45:39.240378    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:45:39.250607    9068 logs.go:276] 1 containers: [0db98522e2bf]
	I0805 10:45:39.250669    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:45:39.261525    9068 logs.go:276] 1 containers: [acf7c60c32f0]
	I0805 10:45:39.261599    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:45:39.272001    9068 logs.go:276] 1 containers: [6e0936bd3eb7]
	I0805 10:45:39.272072    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:45:39.286175    9068 logs.go:276] 0 containers: []
	W0805 10:45:39.286186    9068 logs.go:278] No container was found matching "kindnet"
	I0805 10:45:39.286239    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:45:39.296881    9068 logs.go:276] 1 containers: [385293316d74]
	I0805 10:45:39.296899    9068 logs.go:123] Gathering logs for kube-controller-manager [6e0936bd3eb7] ...
	I0805 10:45:39.296905    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e0936bd3eb7"
	I0805 10:45:39.316033    9068 logs.go:123] Gathering logs for storage-provisioner [385293316d74] ...
	I0805 10:45:39.316043    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385293316d74"
	I0805 10:45:39.327543    9068 logs.go:123] Gathering logs for kube-apiserver [ae2e3b5c46bc] ...
	I0805 10:45:39.327552    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2e3b5c46bc"
	I0805 10:45:39.341301    9068 logs.go:123] Gathering logs for coredns [e2daae6ade13] ...
	I0805 10:45:39.341313    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2daae6ade13"
	I0805 10:45:39.353372    9068 logs.go:123] Gathering logs for kube-proxy [acf7c60c32f0] ...
	I0805 10:45:39.353383    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf7c60c32f0"
	I0805 10:45:39.365202    9068 logs.go:123] Gathering logs for container status ...
	I0805 10:45:39.365213    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:45:39.377231    9068 logs.go:123] Gathering logs for kubelet ...
	I0805 10:45:39.377242    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:45:39.410174    9068 logs.go:123] Gathering logs for kube-scheduler [0db98522e2bf] ...
	I0805 10:45:39.410184    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0db98522e2bf"
	I0805 10:45:39.425065    9068 logs.go:123] Gathering logs for coredns [1a7c8223b623] ...
	I0805 10:45:39.425075    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a7c8223b623"
	I0805 10:45:39.437343    9068 logs.go:123] Gathering logs for coredns [9e1f4b77dc16] ...
	I0805 10:45:39.437354    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e1f4b77dc16"
	I0805 10:45:39.451390    9068 logs.go:123] Gathering logs for coredns [c00ea726514c] ...
	I0805 10:45:39.451399    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c00ea726514c"
	I0805 10:45:39.468457    9068 logs.go:123] Gathering logs for etcd [207bbc832181] ...
	I0805 10:45:39.468467    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207bbc832181"
	I0805 10:45:39.482174    9068 logs.go:123] Gathering logs for Docker ...
	I0805 10:45:39.482184    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:45:39.506580    9068 logs.go:123] Gathering logs for dmesg ...
	I0805 10:45:39.506594    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:45:39.510846    9068 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:45:39.510853    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:45:38.543149    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:45:42.046448    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:45:43.544610    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:45:43.544845    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:45:43.563274    9085 logs.go:276] 1 containers: [a10e618b4b87]
	I0805 10:45:43.563369    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:45:43.579596    9085 logs.go:276] 1 containers: [dd283aa612f7]
	I0805 10:45:43.579668    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:45:43.591039    9085 logs.go:276] 4 containers: [d08309a6b024 911b32609175 09cf1cd1eb79 3c0b270bfc85]
	I0805 10:45:43.591111    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:45:43.605907    9085 logs.go:276] 1 containers: [4cd2114f032c]
	I0805 10:45:43.605981    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:45:43.615905    9085 logs.go:276] 1 containers: [0b4747b7c71b]
	I0805 10:45:43.615972    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:45:43.628467    9085 logs.go:276] 1 containers: [6b61a4d7e65e]
	I0805 10:45:43.628538    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:45:43.638882    9085 logs.go:276] 0 containers: []
	W0805 10:45:43.638894    9085 logs.go:278] No container was found matching "kindnet"
	I0805 10:45:43.638953    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:45:43.649558    9085 logs.go:276] 1 containers: [5ffcf9115c5b]
	I0805 10:45:43.649577    9085 logs.go:123] Gathering logs for kube-apiserver [a10e618b4b87] ...
	I0805 10:45:43.649582    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a10e618b4b87"
	I0805 10:45:43.663862    9085 logs.go:123] Gathering logs for Docker ...
	I0805 10:45:43.663873    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:45:43.689671    9085 logs.go:123] Gathering logs for container status ...
	I0805 10:45:43.689678    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:45:43.701363    9085 logs.go:123] Gathering logs for kubelet ...
	I0805 10:45:43.701379    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:45:43.739711    9085 logs.go:123] Gathering logs for coredns [d08309a6b024] ...
	I0805 10:45:43.739720    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08309a6b024"
	I0805 10:45:43.751605    9085 logs.go:123] Gathering logs for etcd [dd283aa612f7] ...
	I0805 10:45:43.751617    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd283aa612f7"
	I0805 10:45:43.765806    9085 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:45:43.765816    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:45:43.804361    9085 logs.go:123] Gathering logs for coredns [911b32609175] ...
	I0805 10:45:43.804372    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 911b32609175"
	I0805 10:45:43.816631    9085 logs.go:123] Gathering logs for coredns [09cf1cd1eb79] ...
	I0805 10:45:43.816641    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09cf1cd1eb79"
	I0805 10:45:43.828682    9085 logs.go:123] Gathering logs for coredns [3c0b270bfc85] ...
	I0805 10:45:43.828692    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c0b270bfc85"
	I0805 10:45:43.843629    9085 logs.go:123] Gathering logs for kube-proxy [0b4747b7c71b] ...
	I0805 10:45:43.843639    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b4747b7c71b"
	I0805 10:45:43.855262    9085 logs.go:123] Gathering logs for storage-provisioner [5ffcf9115c5b] ...
	I0805 10:45:43.855272    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ffcf9115c5b"
	I0805 10:45:43.868279    9085 logs.go:123] Gathering logs for dmesg ...
	I0805 10:45:43.868290    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:45:43.872753    9085 logs.go:123] Gathering logs for kube-controller-manager [6b61a4d7e65e] ...
	I0805 10:45:43.872762    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b61a4d7e65e"
	I0805 10:45:43.891371    9085 logs.go:123] Gathering logs for kube-scheduler [4cd2114f032c] ...
	I0805 10:45:43.891385    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4cd2114f032c"
	I0805 10:45:46.408308    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:45:47.048783    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:45:47.049016    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:45:47.067294    9068 logs.go:276] 1 containers: [ae2e3b5c46bc]
	I0805 10:45:47.067394    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:45:47.082608    9068 logs.go:276] 1 containers: [207bbc832181]
	I0805 10:45:47.082680    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:45:47.094649    9068 logs.go:276] 4 containers: [9e1f4b77dc16 c00ea726514c e2daae6ade13 1a7c8223b623]
	I0805 10:45:47.094715    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:45:47.104812    9068 logs.go:276] 1 containers: [0db98522e2bf]
	I0805 10:45:47.104872    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:45:47.116928    9068 logs.go:276] 1 containers: [acf7c60c32f0]
	I0805 10:45:47.117004    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:45:47.127894    9068 logs.go:276] 1 containers: [6e0936bd3eb7]
	I0805 10:45:47.127961    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:45:47.139652    9068 logs.go:276] 0 containers: []
	W0805 10:45:47.139666    9068 logs.go:278] No container was found matching "kindnet"
	I0805 10:45:47.139725    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:45:47.151913    9068 logs.go:276] 1 containers: [385293316d74]
	I0805 10:45:47.151927    9068 logs.go:123] Gathering logs for kube-controller-manager [6e0936bd3eb7] ...
	I0805 10:45:47.151932    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e0936bd3eb7"
	I0805 10:45:47.172947    9068 logs.go:123] Gathering logs for dmesg ...
	I0805 10:45:47.172961    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:45:47.177479    9068 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:45:47.177487    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:45:47.212205    9068 logs.go:123] Gathering logs for coredns [9e1f4b77dc16] ...
	I0805 10:45:47.212219    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e1f4b77dc16"
	I0805 10:45:47.224011    9068 logs.go:123] Gathering logs for kube-scheduler [0db98522e2bf] ...
	I0805 10:45:47.224022    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0db98522e2bf"
	I0805 10:45:47.238616    9068 logs.go:123] Gathering logs for kube-apiserver [ae2e3b5c46bc] ...
	I0805 10:45:47.238628    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2e3b5c46bc"
	I0805 10:45:47.253088    9068 logs.go:123] Gathering logs for coredns [c00ea726514c] ...
	I0805 10:45:47.253098    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c00ea726514c"
	I0805 10:45:47.265368    9068 logs.go:123] Gathering logs for Docker ...
	I0805 10:45:47.265383    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:45:47.291009    9068 logs.go:123] Gathering logs for kubelet ...
	I0805 10:45:47.291019    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:45:47.324852    9068 logs.go:123] Gathering logs for coredns [1a7c8223b623] ...
	I0805 10:45:47.324863    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a7c8223b623"
	I0805 10:45:47.336960    9068 logs.go:123] Gathering logs for kube-proxy [acf7c60c32f0] ...
	I0805 10:45:47.336974    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf7c60c32f0"
	I0805 10:45:47.349333    9068 logs.go:123] Gathering logs for storage-provisioner [385293316d74] ...
	I0805 10:45:47.349345    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385293316d74"
	I0805 10:45:47.363986    9068 logs.go:123] Gathering logs for etcd [207bbc832181] ...
	I0805 10:45:47.363997    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207bbc832181"
	I0805 10:45:47.385286    9068 logs.go:123] Gathering logs for coredns [e2daae6ade13] ...
	I0805 10:45:47.385297    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2daae6ade13"
	I0805 10:45:47.397137    9068 logs.go:123] Gathering logs for container status ...
	I0805 10:45:47.397149    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:45:49.911418    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:45:51.410718    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:45:51.410884    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:45:51.426266    9085 logs.go:276] 1 containers: [a10e618b4b87]
	I0805 10:45:51.426340    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:45:51.441928    9085 logs.go:276] 1 containers: [dd283aa612f7]
	I0805 10:45:51.441996    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:45:51.456798    9085 logs.go:276] 4 containers: [d08309a6b024 911b32609175 09cf1cd1eb79 3c0b270bfc85]
	I0805 10:45:51.456877    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:45:51.469188    9085 logs.go:276] 1 containers: [4cd2114f032c]
	I0805 10:45:51.469250    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:45:51.486189    9085 logs.go:276] 1 containers: [0b4747b7c71b]
	I0805 10:45:51.486247    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:45:51.496928    9085 logs.go:276] 1 containers: [6b61a4d7e65e]
	I0805 10:45:51.496988    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:45:51.507627    9085 logs.go:276] 0 containers: []
	W0805 10:45:51.507639    9085 logs.go:278] No container was found matching "kindnet"
	I0805 10:45:51.507699    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:45:51.518465    9085 logs.go:276] 1 containers: [5ffcf9115c5b]
	I0805 10:45:51.518483    9085 logs.go:123] Gathering logs for dmesg ...
	I0805 10:45:51.518488    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:45:51.523250    9085 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:45:51.523257    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:45:51.559024    9085 logs.go:123] Gathering logs for kube-apiserver [a10e618b4b87] ...
	I0805 10:45:51.559038    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a10e618b4b87"
	I0805 10:45:51.573582    9085 logs.go:123] Gathering logs for etcd [dd283aa612f7] ...
	I0805 10:45:51.573594    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd283aa612f7"
	I0805 10:45:51.587943    9085 logs.go:123] Gathering logs for coredns [d08309a6b024] ...
	I0805 10:45:51.587956    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08309a6b024"
	I0805 10:45:51.599641    9085 logs.go:123] Gathering logs for coredns [911b32609175] ...
	I0805 10:45:51.599655    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 911b32609175"
	I0805 10:45:51.611262    9085 logs.go:123] Gathering logs for coredns [3c0b270bfc85] ...
	I0805 10:45:51.611274    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c0b270bfc85"
	I0805 10:45:51.623476    9085 logs.go:123] Gathering logs for kube-scheduler [4cd2114f032c] ...
	I0805 10:45:51.623488    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4cd2114f032c"
	I0805 10:45:51.638141    9085 logs.go:123] Gathering logs for Docker ...
	I0805 10:45:51.638150    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:45:51.661800    9085 logs.go:123] Gathering logs for coredns [09cf1cd1eb79] ...
	I0805 10:45:51.661808    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09cf1cd1eb79"
	I0805 10:45:51.672942    9085 logs.go:123] Gathering logs for kube-proxy [0b4747b7c71b] ...
	I0805 10:45:51.672953    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b4747b7c71b"
	I0805 10:45:51.684463    9085 logs.go:123] Gathering logs for kubelet ...
	I0805 10:45:51.684476    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:45:51.721572    9085 logs.go:123] Gathering logs for kube-controller-manager [6b61a4d7e65e] ...
	I0805 10:45:51.721581    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b61a4d7e65e"
	I0805 10:45:51.743758    9085 logs.go:123] Gathering logs for storage-provisioner [5ffcf9115c5b] ...
	I0805 10:45:51.743769    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ffcf9115c5b"
	I0805 10:45:51.755850    9085 logs.go:123] Gathering logs for container status ...
	I0805 10:45:51.755865    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:45:54.913753    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:45:54.913922    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:45:54.930173    9068 logs.go:276] 1 containers: [ae2e3b5c46bc]
	I0805 10:45:54.930265    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:45:54.942831    9068 logs.go:276] 1 containers: [207bbc832181]
	I0805 10:45:54.942901    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:45:54.954614    9068 logs.go:276] 4 containers: [9e1f4b77dc16 c00ea726514c e2daae6ade13 1a7c8223b623]
	I0805 10:45:54.954685    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:45:54.966015    9068 logs.go:276] 1 containers: [0db98522e2bf]
	I0805 10:45:54.966084    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:45:54.976469    9068 logs.go:276] 1 containers: [acf7c60c32f0]
	I0805 10:45:54.976536    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:45:54.987143    9068 logs.go:276] 1 containers: [6e0936bd3eb7]
	I0805 10:45:54.987214    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:45:54.997359    9068 logs.go:276] 0 containers: []
	W0805 10:45:54.997376    9068 logs.go:278] No container was found matching "kindnet"
	I0805 10:45:54.997436    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:45:55.008152    9068 logs.go:276] 1 containers: [385293316d74]
	I0805 10:45:55.008168    9068 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:45:55.008173    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:45:55.050416    9068 logs.go:123] Gathering logs for etcd [207bbc832181] ...
	I0805 10:45:55.050427    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207bbc832181"
	I0805 10:45:55.064554    9068 logs.go:123] Gathering logs for storage-provisioner [385293316d74] ...
	I0805 10:45:55.064568    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385293316d74"
	I0805 10:45:55.085187    9068 logs.go:123] Gathering logs for kube-apiserver [ae2e3b5c46bc] ...
	I0805 10:45:55.085198    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2e3b5c46bc"
	I0805 10:45:55.100649    9068 logs.go:123] Gathering logs for Docker ...
	I0805 10:45:55.100662    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:45:55.126214    9068 logs.go:123] Gathering logs for coredns [9e1f4b77dc16] ...
	I0805 10:45:55.126234    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e1f4b77dc16"
	I0805 10:45:55.138150    9068 logs.go:123] Gathering logs for coredns [c00ea726514c] ...
	I0805 10:45:55.138164    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c00ea726514c"
	I0805 10:45:55.150174    9068 logs.go:123] Gathering logs for coredns [e2daae6ade13] ...
	I0805 10:45:55.150186    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2daae6ade13"
	I0805 10:45:55.161964    9068 logs.go:123] Gathering logs for coredns [1a7c8223b623] ...
	I0805 10:45:55.161975    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a7c8223b623"
	I0805 10:45:55.180109    9068 logs.go:123] Gathering logs for kubelet ...
	I0805 10:45:55.180122    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:45:55.214157    9068 logs.go:123] Gathering logs for dmesg ...
	I0805 10:45:55.214169    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:45:55.218745    9068 logs.go:123] Gathering logs for kube-scheduler [0db98522e2bf] ...
	I0805 10:45:55.218759    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0db98522e2bf"
	I0805 10:45:55.233608    9068 logs.go:123] Gathering logs for kube-proxy [acf7c60c32f0] ...
	I0805 10:45:55.233618    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf7c60c32f0"
	I0805 10:45:55.245690    9068 logs.go:123] Gathering logs for kube-controller-manager [6e0936bd3eb7] ...
	I0805 10:45:55.245700    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e0936bd3eb7"
	I0805 10:45:55.263860    9068 logs.go:123] Gathering logs for container status ...
	I0805 10:45:55.263871    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
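
	Each diagnostic pass begins by resolving one container ID per control-plane component with docker ps -a --filter=name=k8s_<component> --format={{.ID}}, exactly as the Run lines show; the kindnet query returning zero containers produces the repeated warning. A hedged local sketch of that discovery step via os/exec follows (function and variable names are hypothetical; minikube actually runs these commands through its ssh_runner over SSH):

	    package main

	    import (
	    	"fmt"
	    	"os/exec"
	    	"strings"
	    )

	    // containerIDs lists the IDs of all containers (running or exited) whose
	    // name matches k8s_<component>, mirroring the docker ps invocations above.
	    func containerIDs(component string) ([]string, error) {
	    	out, err := exec.Command("docker", "ps", "-a",
	    		"--filter", "name=k8s_"+component,
	    		"--format", "{{.ID}}").Output()
	    	if err != nil {
	    		return nil, err
	    	}
	    	// One ID per line; Fields also tolerates a trailing newline.
	    	return strings.Fields(string(out)), nil
	    }

	    func main() {
	    	components := []string{
	    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	    		"kube-proxy", "kube-controller-manager", "kindnet",
	    		"storage-provisioner",
	    	}
	    	for _, c := range components {
	    		ids, err := containerIDs(c)
	    		if err != nil {
	    			fmt.Println(c, "error:", err)
	    			continue
	    		}
	    		fmt.Printf("%d containers for %s: %v\n", len(ids), c, ids)
	    	}
	    }

	The component list matches the eight queries issued in each cycle above; an empty result (as for "kindnet") is reported rather than treated as an error.
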
	I0805 10:45:54.270961    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:45:57.777359    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:45:59.272678    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:45:59.272886    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:45:59.295112    9085 logs.go:276] 1 containers: [a10e618b4b87]
	I0805 10:45:59.295211    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:45:59.312706    9085 logs.go:276] 1 containers: [dd283aa612f7]
	I0805 10:45:59.312788    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:45:59.325739    9085 logs.go:276] 4 containers: [d08309a6b024 911b32609175 09cf1cd1eb79 3c0b270bfc85]
	I0805 10:45:59.325812    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:45:59.336450    9085 logs.go:276] 1 containers: [4cd2114f032c]
	I0805 10:45:59.336519    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:45:59.346765    9085 logs.go:276] 1 containers: [0b4747b7c71b]
	I0805 10:45:59.346832    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:45:59.356871    9085 logs.go:276] 1 containers: [6b61a4d7e65e]
	I0805 10:45:59.356931    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:45:59.366946    9085 logs.go:276] 0 containers: []
	W0805 10:45:59.366963    9085 logs.go:278] No container was found matching "kindnet"
	I0805 10:45:59.367022    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:45:59.381345    9085 logs.go:276] 1 containers: [5ffcf9115c5b]
	I0805 10:45:59.381362    9085 logs.go:123] Gathering logs for kube-apiserver [a10e618b4b87] ...
	I0805 10:45:59.381367    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a10e618b4b87"
	I0805 10:45:59.395928    9085 logs.go:123] Gathering logs for coredns [09cf1cd1eb79] ...
	I0805 10:45:59.395937    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09cf1cd1eb79"
	I0805 10:45:59.407897    9085 logs.go:123] Gathering logs for kube-proxy [0b4747b7c71b] ...
	I0805 10:45:59.407911    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b4747b7c71b"
	I0805 10:45:59.419821    9085 logs.go:123] Gathering logs for storage-provisioner [5ffcf9115c5b] ...
	I0805 10:45:59.419834    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ffcf9115c5b"
	I0805 10:45:59.431378    9085 logs.go:123] Gathering logs for Docker ...
	I0805 10:45:59.431389    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:45:59.455476    9085 logs.go:123] Gathering logs for dmesg ...
	I0805 10:45:59.455484    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:45:59.460047    9085 logs.go:123] Gathering logs for container status ...
	I0805 10:45:59.460054    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:45:59.471563    9085 logs.go:123] Gathering logs for kube-scheduler [4cd2114f032c] ...
	I0805 10:45:59.471575    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4cd2114f032c"
	I0805 10:45:59.486773    9085 logs.go:123] Gathering logs for coredns [3c0b270bfc85] ...
	I0805 10:45:59.486783    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c0b270bfc85"
	I0805 10:45:59.507580    9085 logs.go:123] Gathering logs for kubelet ...
	I0805 10:45:59.507592    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:45:59.546744    9085 logs.go:123] Gathering logs for etcd [dd283aa612f7] ...
	I0805 10:45:59.546756    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd283aa612f7"
	I0805 10:45:59.560840    9085 logs.go:123] Gathering logs for coredns [d08309a6b024] ...
	I0805 10:45:59.560853    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08309a6b024"
	I0805 10:45:59.572497    9085 logs.go:123] Gathering logs for coredns [911b32609175] ...
	I0805 10:45:59.572508    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 911b32609175"
	I0805 10:45:59.584553    9085 logs.go:123] Gathering logs for kube-controller-manager [6b61a4d7e65e] ...
	I0805 10:45:59.584565    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b61a4d7e65e"
	I0805 10:45:59.602209    9085 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:45:59.602218    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:46:02.138536    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:46:02.779928    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:46:02.780359    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:46:02.814048    9068 logs.go:276] 1 containers: [ae2e3b5c46bc]
	I0805 10:46:02.814180    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:46:02.838071    9068 logs.go:276] 1 containers: [207bbc832181]
	I0805 10:46:02.838192    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:46:02.852977    9068 logs.go:276] 4 containers: [9e1f4b77dc16 c00ea726514c e2daae6ade13 1a7c8223b623]
	I0805 10:46:02.853050    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:46:02.865335    9068 logs.go:276] 1 containers: [0db98522e2bf]
	I0805 10:46:02.865410    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:46:02.876569    9068 logs.go:276] 1 containers: [acf7c60c32f0]
	I0805 10:46:02.876643    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:46:02.887314    9068 logs.go:276] 1 containers: [6e0936bd3eb7]
	I0805 10:46:02.887376    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:46:02.902497    9068 logs.go:276] 0 containers: []
	W0805 10:46:02.902510    9068 logs.go:278] No container was found matching "kindnet"
	I0805 10:46:02.902570    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:46:02.913238    9068 logs.go:276] 1 containers: [385293316d74]
	I0805 10:46:02.913256    9068 logs.go:123] Gathering logs for coredns [e2daae6ade13] ...
	I0805 10:46:02.913261    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2daae6ade13"
	I0805 10:46:02.925265    9068 logs.go:123] Gathering logs for kube-proxy [acf7c60c32f0] ...
	I0805 10:46:02.925279    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf7c60c32f0"
	I0805 10:46:02.937317    9068 logs.go:123] Gathering logs for kube-apiserver [ae2e3b5c46bc] ...
	I0805 10:46:02.937329    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2e3b5c46bc"
	I0805 10:46:02.952549    9068 logs.go:123] Gathering logs for storage-provisioner [385293316d74] ...
	I0805 10:46:02.952562    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385293316d74"
	I0805 10:46:02.964657    9068 logs.go:123] Gathering logs for dmesg ...
	I0805 10:46:02.964668    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:46:02.969124    9068 logs.go:123] Gathering logs for coredns [9e1f4b77dc16] ...
	I0805 10:46:02.969131    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e1f4b77dc16"
	I0805 10:46:02.981088    9068 logs.go:123] Gathering logs for kube-scheduler [0db98522e2bf] ...
	I0805 10:46:02.981099    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0db98522e2bf"
	I0805 10:46:02.995938    9068 logs.go:123] Gathering logs for kubelet ...
	I0805 10:46:02.995949    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:46:03.031110    9068 logs.go:123] Gathering logs for etcd [207bbc832181] ...
	I0805 10:46:03.031118    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207bbc832181"
	I0805 10:46:03.045200    9068 logs.go:123] Gathering logs for coredns [c00ea726514c] ...
	I0805 10:46:03.045211    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c00ea726514c"
	I0805 10:46:03.061814    9068 logs.go:123] Gathering logs for coredns [1a7c8223b623] ...
	I0805 10:46:03.061826    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a7c8223b623"
	I0805 10:46:03.079923    9068 logs.go:123] Gathering logs for kube-controller-manager [6e0936bd3eb7] ...
	I0805 10:46:03.079935    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e0936bd3eb7"
	I0805 10:46:03.098752    9068 logs.go:123] Gathering logs for Docker ...
	I0805 10:46:03.098763    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:46:03.124185    9068 logs.go:123] Gathering logs for container status ...
	I0805 10:46:03.124201    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:46:03.136228    9068 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:46:03.136245    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:46:07.139658    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:46:07.139846    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:46:07.164648    9085 logs.go:276] 1 containers: [a10e618b4b87]
	I0805 10:46:07.164767    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:46:07.183462    9085 logs.go:276] 1 containers: [dd283aa612f7]
	I0805 10:46:07.183532    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:46:07.196416    9085 logs.go:276] 4 containers: [d08309a6b024 911b32609175 09cf1cd1eb79 3c0b270bfc85]
	I0805 10:46:07.196494    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:46:07.207882    9085 logs.go:276] 1 containers: [4cd2114f032c]
	I0805 10:46:07.207951    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:46:07.218361    9085 logs.go:276] 1 containers: [0b4747b7c71b]
	I0805 10:46:07.218432    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:46:07.229011    9085 logs.go:276] 1 containers: [6b61a4d7e65e]
	I0805 10:46:07.229083    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:46:07.240426    9085 logs.go:276] 0 containers: []
	W0805 10:46:07.240437    9085 logs.go:278] No container was found matching "kindnet"
	I0805 10:46:07.240496    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:46:07.252123    9085 logs.go:276] 1 containers: [5ffcf9115c5b]
	I0805 10:46:07.252144    9085 logs.go:123] Gathering logs for kubelet ...
	I0805 10:46:07.252149    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:46:07.290424    9085 logs.go:123] Gathering logs for coredns [d08309a6b024] ...
	I0805 10:46:07.290452    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08309a6b024"
	I0805 10:46:07.303251    9085 logs.go:123] Gathering logs for coredns [911b32609175] ...
	I0805 10:46:07.303265    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 911b32609175"
	I0805 10:46:07.315835    9085 logs.go:123] Gathering logs for coredns [3c0b270bfc85] ...
	I0805 10:46:07.315845    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c0b270bfc85"
	I0805 10:46:07.327342    9085 logs.go:123] Gathering logs for dmesg ...
	I0805 10:46:07.327353    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:46:07.332270    9085 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:46:07.332279    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:46:05.674937    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:46:07.369305    9085 logs.go:123] Gathering logs for kube-proxy [0b4747b7c71b] ...
	I0805 10:46:07.369321    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b4747b7c71b"
	I0805 10:46:07.381600    9085 logs.go:123] Gathering logs for coredns [09cf1cd1eb79] ...
	I0805 10:46:07.381612    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09cf1cd1eb79"
	I0805 10:46:07.399631    9085 logs.go:123] Gathering logs for kube-scheduler [4cd2114f032c] ...
	I0805 10:46:07.399642    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4cd2114f032c"
	I0805 10:46:07.414055    9085 logs.go:123] Gathering logs for storage-provisioner [5ffcf9115c5b] ...
	I0805 10:46:07.414068    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ffcf9115c5b"
	I0805 10:46:07.425964    9085 logs.go:123] Gathering logs for kube-apiserver [a10e618b4b87] ...
	I0805 10:46:07.425975    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a10e618b4b87"
	I0805 10:46:07.439764    9085 logs.go:123] Gathering logs for etcd [dd283aa612f7] ...
	I0805 10:46:07.439778    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd283aa612f7"
	I0805 10:46:07.453249    9085 logs.go:123] Gathering logs for kube-controller-manager [6b61a4d7e65e] ...
	I0805 10:46:07.453259    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b61a4d7e65e"
	I0805 10:46:07.475227    9085 logs.go:123] Gathering logs for Docker ...
	I0805 10:46:07.475237    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:46:07.501237    9085 logs.go:123] Gathering logs for container status ...
	I0805 10:46:07.501245    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:46:10.014273    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:46:10.677232    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:46:10.677520    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:46:10.707900    9068 logs.go:276] 1 containers: [ae2e3b5c46bc]
	I0805 10:46:10.708030    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:46:10.726443    9068 logs.go:276] 1 containers: [207bbc832181]
	I0805 10:46:10.726542    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:46:10.740718    9068 logs.go:276] 4 containers: [9e1f4b77dc16 c00ea726514c e2daae6ade13 1a7c8223b623]
	I0805 10:46:10.740802    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:46:10.752925    9068 logs.go:276] 1 containers: [0db98522e2bf]
	I0805 10:46:10.752993    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:46:10.763514    9068 logs.go:276] 1 containers: [acf7c60c32f0]
	I0805 10:46:10.763587    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:46:10.778041    9068 logs.go:276] 1 containers: [6e0936bd3eb7]
	I0805 10:46:10.778110    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:46:10.788642    9068 logs.go:276] 0 containers: []
	W0805 10:46:10.788653    9068 logs.go:278] No container was found matching "kindnet"
	I0805 10:46:10.788708    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:46:10.799622    9068 logs.go:276] 1 containers: [385293316d74]
	I0805 10:46:10.799639    9068 logs.go:123] Gathering logs for dmesg ...
	I0805 10:46:10.799644    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:46:10.804690    9068 logs.go:123] Gathering logs for kube-controller-manager [6e0936bd3eb7] ...
	I0805 10:46:10.804701    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e0936bd3eb7"
	I0805 10:46:10.822216    9068 logs.go:123] Gathering logs for kubelet ...
	I0805 10:46:10.822227    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:46:10.854798    9068 logs.go:123] Gathering logs for etcd [207bbc832181] ...
	I0805 10:46:10.854806    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207bbc832181"
	I0805 10:46:10.868653    9068 logs.go:123] Gathering logs for coredns [c00ea726514c] ...
	I0805 10:46:10.868665    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c00ea726514c"
	I0805 10:46:10.880134    9068 logs.go:123] Gathering logs for coredns [1a7c8223b623] ...
	I0805 10:46:10.880144    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a7c8223b623"
	I0805 10:46:10.892193    9068 logs.go:123] Gathering logs for kube-scheduler [0db98522e2bf] ...
	I0805 10:46:10.892205    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0db98522e2bf"
	I0805 10:46:10.906684    9068 logs.go:123] Gathering logs for kube-proxy [acf7c60c32f0] ...
	I0805 10:46:10.906696    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf7c60c32f0"
	I0805 10:46:10.919099    9068 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:46:10.919110    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:46:10.955351    9068 logs.go:123] Gathering logs for kube-apiserver [ae2e3b5c46bc] ...
	I0805 10:46:10.955361    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2e3b5c46bc"
	I0805 10:46:10.969986    9068 logs.go:123] Gathering logs for coredns [e2daae6ade13] ...
	I0805 10:46:10.969999    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2daae6ade13"
	I0805 10:46:10.984094    9068 logs.go:123] Gathering logs for storage-provisioner [385293316d74] ...
	I0805 10:46:10.984105    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385293316d74"
	I0805 10:46:10.995526    9068 logs.go:123] Gathering logs for Docker ...
	I0805 10:46:10.995535    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:46:11.020868    9068 logs.go:123] Gathering logs for container status ...
	I0805 10:46:11.020881    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:46:11.033709    9068 logs.go:123] Gathering logs for coredns [9e1f4b77dc16] ...
	I0805 10:46:11.033720    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e1f4b77dc16"
	I0805 10:46:13.557894    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:46:15.016725    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:46:15.017107    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:46:15.057180    9085 logs.go:276] 1 containers: [a10e618b4b87]
	I0805 10:46:15.057325    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:46:15.080079    9085 logs.go:276] 1 containers: [dd283aa612f7]
	I0805 10:46:15.080174    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:46:15.095583    9085 logs.go:276] 4 containers: [d08309a6b024 911b32609175 09cf1cd1eb79 3c0b270bfc85]
	I0805 10:46:15.095665    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:46:15.107646    9085 logs.go:276] 1 containers: [4cd2114f032c]
	I0805 10:46:15.107715    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:46:15.118282    9085 logs.go:276] 1 containers: [0b4747b7c71b]
	I0805 10:46:15.118353    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:46:15.129396    9085 logs.go:276] 1 containers: [6b61a4d7e65e]
	I0805 10:46:15.129464    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:46:15.139399    9085 logs.go:276] 0 containers: []
	W0805 10:46:15.139411    9085 logs.go:278] No container was found matching "kindnet"
	I0805 10:46:15.139469    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:46:15.150290    9085 logs.go:276] 1 containers: [5ffcf9115c5b]
	I0805 10:46:15.150307    9085 logs.go:123] Gathering logs for kube-apiserver [a10e618b4b87] ...
	I0805 10:46:15.150312    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a10e618b4b87"
	I0805 10:46:15.164666    9085 logs.go:123] Gathering logs for coredns [09cf1cd1eb79] ...
	I0805 10:46:15.164676    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09cf1cd1eb79"
	I0805 10:46:15.176679    9085 logs.go:123] Gathering logs for kube-scheduler [4cd2114f032c] ...
	I0805 10:46:15.176693    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4cd2114f032c"
	I0805 10:46:15.191826    9085 logs.go:123] Gathering logs for kube-proxy [0b4747b7c71b] ...
	I0805 10:46:15.191836    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b4747b7c71b"
	I0805 10:46:15.203540    9085 logs.go:123] Gathering logs for container status ...
	I0805 10:46:15.203553    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:46:15.215499    9085 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:46:15.215512    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:46:15.251038    9085 logs.go:123] Gathering logs for etcd [dd283aa612f7] ...
	I0805 10:46:15.251052    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd283aa612f7"
	I0805 10:46:15.265366    9085 logs.go:123] Gathering logs for kube-controller-manager [6b61a4d7e65e] ...
	I0805 10:46:15.265380    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b61a4d7e65e"
	I0805 10:46:15.283331    9085 logs.go:123] Gathering logs for kubelet ...
	I0805 10:46:15.283346    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:46:15.322184    9085 logs.go:123] Gathering logs for storage-provisioner [5ffcf9115c5b] ...
	I0805 10:46:15.322200    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ffcf9115c5b"
	I0805 10:46:15.333695    9085 logs.go:123] Gathering logs for dmesg ...
	I0805 10:46:15.333709    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:46:15.337977    9085 logs.go:123] Gathering logs for coredns [d08309a6b024] ...
	I0805 10:46:15.337984    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08309a6b024"
	I0805 10:46:15.349633    9085 logs.go:123] Gathering logs for coredns [911b32609175] ...
	I0805 10:46:15.349645    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 911b32609175"
	I0805 10:46:15.361358    9085 logs.go:123] Gathering logs for coredns [3c0b270bfc85] ...
	I0805 10:46:15.361373    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c0b270bfc85"
	I0805 10:46:15.376749    9085 logs.go:123] Gathering logs for Docker ...
	I0805 10:46:15.376763    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:46:18.560155    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:46:18.560286    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:46:18.572427    9068 logs.go:276] 1 containers: [ae2e3b5c46bc]
	I0805 10:46:18.572496    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:46:18.583605    9068 logs.go:276] 1 containers: [207bbc832181]
	I0805 10:46:18.583674    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:46:18.595384    9068 logs.go:276] 4 containers: [9e1f4b77dc16 c00ea726514c e2daae6ade13 1a7c8223b623]
	I0805 10:46:18.595449    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:46:18.605729    9068 logs.go:276] 1 containers: [0db98522e2bf]
	I0805 10:46:18.605789    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:46:18.616547    9068 logs.go:276] 1 containers: [acf7c60c32f0]
	I0805 10:46:18.616600    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:46:18.626881    9068 logs.go:276] 1 containers: [6e0936bd3eb7]
	I0805 10:46:18.626935    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:46:18.637082    9068 logs.go:276] 0 containers: []
	W0805 10:46:18.637092    9068 logs.go:278] No container was found matching "kindnet"
	I0805 10:46:18.637138    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:46:18.647784    9068 logs.go:276] 1 containers: [385293316d74]
	I0805 10:46:18.647802    9068 logs.go:123] Gathering logs for kubelet ...
	I0805 10:46:18.647807    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:46:18.681099    9068 logs.go:123] Gathering logs for kube-proxy [acf7c60c32f0] ...
	I0805 10:46:18.681106    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf7c60c32f0"
	I0805 10:46:18.693167    9068 logs.go:123] Gathering logs for kube-controller-manager [6e0936bd3eb7] ...
	I0805 10:46:18.693176    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e0936bd3eb7"
	I0805 10:46:18.711778    9068 logs.go:123] Gathering logs for storage-provisioner [385293316d74] ...
	I0805 10:46:18.711791    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385293316d74"
	I0805 10:46:18.723630    9068 logs.go:123] Gathering logs for kube-apiserver [ae2e3b5c46bc] ...
	I0805 10:46:18.723644    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2e3b5c46bc"
	I0805 10:46:18.737744    9068 logs.go:123] Gathering logs for etcd [207bbc832181] ...
	I0805 10:46:18.737754    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207bbc832181"
	I0805 10:46:18.751462    9068 logs.go:123] Gathering logs for coredns [c00ea726514c] ...
	I0805 10:46:18.751471    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c00ea726514c"
	I0805 10:46:18.762718    9068 logs.go:123] Gathering logs for Docker ...
	I0805 10:46:18.762732    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:46:18.785876    9068 logs.go:123] Gathering logs for coredns [9e1f4b77dc16] ...
	I0805 10:46:18.785885    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e1f4b77dc16"
	I0805 10:46:18.797508    9068 logs.go:123] Gathering logs for coredns [1a7c8223b623] ...
	I0805 10:46:18.797518    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a7c8223b623"
	I0805 10:46:18.808780    9068 logs.go:123] Gathering logs for kube-scheduler [0db98522e2bf] ...
	I0805 10:46:18.808803    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0db98522e2bf"
	I0805 10:46:18.825016    9068 logs.go:123] Gathering logs for container status ...
	I0805 10:46:18.825029    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:46:18.837732    9068 logs.go:123] Gathering logs for dmesg ...
	I0805 10:46:18.837744    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:46:18.841882    9068 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:46:18.841890    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:46:18.875616    9068 logs.go:123] Gathering logs for coredns [e2daae6ade13] ...
	I0805 10:46:18.875629    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2daae6ade13"
	I0805 10:46:17.903857    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:46:21.388389    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:46:22.906158    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:46:22.906380    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:46:22.934421    9085 logs.go:276] 1 containers: [a10e618b4b87]
	I0805 10:46:22.934552    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:46:22.951602    9085 logs.go:276] 1 containers: [dd283aa612f7]
	I0805 10:46:22.951684    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:46:22.965160    9085 logs.go:276] 4 containers: [d08309a6b024 911b32609175 09cf1cd1eb79 3c0b270bfc85]
	I0805 10:46:22.965229    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:46:22.976773    9085 logs.go:276] 1 containers: [4cd2114f032c]
	I0805 10:46:22.976845    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:46:22.987456    9085 logs.go:276] 1 containers: [0b4747b7c71b]
	I0805 10:46:22.987551    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:46:22.998451    9085 logs.go:276] 1 containers: [6b61a4d7e65e]
	I0805 10:46:22.998513    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:46:23.008266    9085 logs.go:276] 0 containers: []
	W0805 10:46:23.008279    9085 logs.go:278] No container was found matching "kindnet"
	I0805 10:46:23.008336    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:46:23.019390    9085 logs.go:276] 1 containers: [5ffcf9115c5b]
	I0805 10:46:23.019407    9085 logs.go:123] Gathering logs for kube-controller-manager [6b61a4d7e65e] ...
	I0805 10:46:23.019412    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b61a4d7e65e"
	I0805 10:46:23.037137    9085 logs.go:123] Gathering logs for Docker ...
	I0805 10:46:23.037150    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:46:23.061970    9085 logs.go:123] Gathering logs for kube-apiserver [a10e618b4b87] ...
	I0805 10:46:23.061978    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a10e618b4b87"
	I0805 10:46:23.075793    9085 logs.go:123] Gathering logs for coredns [09cf1cd1eb79] ...
	I0805 10:46:23.075807    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09cf1cd1eb79"
	I0805 10:46:23.088571    9085 logs.go:123] Gathering logs for kube-proxy [0b4747b7c71b] ...
	I0805 10:46:23.088583    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b4747b7c71b"
	I0805 10:46:23.102323    9085 logs.go:123] Gathering logs for storage-provisioner [5ffcf9115c5b] ...
	I0805 10:46:23.102334    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ffcf9115c5b"
	I0805 10:46:23.113889    9085 logs.go:123] Gathering logs for coredns [d08309a6b024] ...
	I0805 10:46:23.113902    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08309a6b024"
	I0805 10:46:23.126118    9085 logs.go:123] Gathering logs for coredns [3c0b270bfc85] ...
	I0805 10:46:23.126130    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c0b270bfc85"
	I0805 10:46:23.137691    9085 logs.go:123] Gathering logs for kubelet ...
	I0805 10:46:23.137701    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:46:23.175507    9085 logs.go:123] Gathering logs for container status ...
	I0805 10:46:23.175515    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:46:23.187327    9085 logs.go:123] Gathering logs for etcd [dd283aa612f7] ...
	I0805 10:46:23.187339    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd283aa612f7"
	I0805 10:46:23.201883    9085 logs.go:123] Gathering logs for coredns [911b32609175] ...
	I0805 10:46:23.201898    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 911b32609175"
	I0805 10:46:23.213279    9085 logs.go:123] Gathering logs for kube-scheduler [4cd2114f032c] ...
	I0805 10:46:23.213293    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4cd2114f032c"
	I0805 10:46:23.232084    9085 logs.go:123] Gathering logs for dmesg ...
	I0805 10:46:23.232098    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:46:23.236871    9085 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:46:23.236879    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:46:25.772373    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:46:26.390714    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:46:26.390888    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:46:26.407681    9068 logs.go:276] 1 containers: [ae2e3b5c46bc]
	I0805 10:46:26.407760    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:46:26.420882    9068 logs.go:276] 1 containers: [207bbc832181]
	I0805 10:46:26.420952    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:46:26.432784    9068 logs.go:276] 4 containers: [9e1f4b77dc16 c00ea726514c e2daae6ade13 1a7c8223b623]
	I0805 10:46:26.432849    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:46:26.443211    9068 logs.go:276] 1 containers: [0db98522e2bf]
	I0805 10:46:26.443273    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:46:26.453383    9068 logs.go:276] 1 containers: [acf7c60c32f0]
	I0805 10:46:26.453455    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:46:26.463700    9068 logs.go:276] 1 containers: [6e0936bd3eb7]
	I0805 10:46:26.463772    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:46:26.474344    9068 logs.go:276] 0 containers: []
	W0805 10:46:26.474356    9068 logs.go:278] No container was found matching "kindnet"
	I0805 10:46:26.474415    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:46:26.485032    9068 logs.go:276] 1 containers: [385293316d74]
	I0805 10:46:26.485052    9068 logs.go:123] Gathering logs for kube-apiserver [ae2e3b5c46bc] ...
	I0805 10:46:26.485058    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2e3b5c46bc"
	I0805 10:46:26.499476    9068 logs.go:123] Gathering logs for coredns [c00ea726514c] ...
	I0805 10:46:26.499487    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c00ea726514c"
	I0805 10:46:26.511052    9068 logs.go:123] Gathering logs for coredns [e2daae6ade13] ...
	I0805 10:46:26.511062    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2daae6ade13"
	I0805 10:46:26.523187    9068 logs.go:123] Gathering logs for coredns [1a7c8223b623] ...
	I0805 10:46:26.523197    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a7c8223b623"
	I0805 10:46:26.534796    9068 logs.go:123] Gathering logs for kube-scheduler [0db98522e2bf] ...
	I0805 10:46:26.534808    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0db98522e2bf"
	I0805 10:46:26.549855    9068 logs.go:123] Gathering logs for storage-provisioner [385293316d74] ...
	I0805 10:46:26.549869    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385293316d74"
	I0805 10:46:26.563113    9068 logs.go:123] Gathering logs for dmesg ...
	I0805 10:46:26.563123    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:46:26.567346    9068 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:46:26.567353    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:46:26.602100    9068 logs.go:123] Gathering logs for Docker ...
	I0805 10:46:26.602114    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:46:26.625101    9068 logs.go:123] Gathering logs for coredns [9e1f4b77dc16] ...
	I0805 10:46:26.625109    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e1f4b77dc16"
	I0805 10:46:26.637207    9068 logs.go:123] Gathering logs for kube-proxy [acf7c60c32f0] ...
	I0805 10:46:26.637221    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf7c60c32f0"
	I0805 10:46:26.649267    9068 logs.go:123] Gathering logs for kubelet ...
	I0805 10:46:26.649278    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:46:26.681541    9068 logs.go:123] Gathering logs for container status ...
	I0805 10:46:26.681550    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:46:26.692404    9068 logs.go:123] Gathering logs for etcd [207bbc832181] ...
	I0805 10:46:26.692416    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207bbc832181"
	I0805 10:46:26.707520    9068 logs.go:123] Gathering logs for kube-controller-manager [6e0936bd3eb7] ...
	I0805 10:46:26.707534    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e0936bd3eb7"
	I0805 10:46:29.226710    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:46:30.774656    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:46:30.774919    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:46:30.809021    9085 logs.go:276] 1 containers: [a10e618b4b87]
	I0805 10:46:30.809138    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:46:30.826087    9085 logs.go:276] 1 containers: [dd283aa612f7]
	I0805 10:46:30.826172    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:46:30.839190    9085 logs.go:276] 4 containers: [d08309a6b024 911b32609175 09cf1cd1eb79 3c0b270bfc85]
	I0805 10:46:30.839267    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:46:30.850703    9085 logs.go:276] 1 containers: [4cd2114f032c]
	I0805 10:46:30.850771    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:46:30.860648    9085 logs.go:276] 1 containers: [0b4747b7c71b]
	I0805 10:46:30.860719    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:46:30.871328    9085 logs.go:276] 1 containers: [6b61a4d7e65e]
	I0805 10:46:30.871392    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:46:30.881479    9085 logs.go:276] 0 containers: []
	W0805 10:46:30.881492    9085 logs.go:278] No container was found matching "kindnet"
	I0805 10:46:30.881549    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:46:30.892045    9085 logs.go:276] 1 containers: [5ffcf9115c5b]
	I0805 10:46:30.892063    9085 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:46:30.892069    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:46:30.927362    9085 logs.go:123] Gathering logs for coredns [911b32609175] ...
	I0805 10:46:30.927372    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 911b32609175"
	I0805 10:46:30.939125    9085 logs.go:123] Gathering logs for kube-controller-manager [6b61a4d7e65e] ...
	I0805 10:46:30.939136    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b61a4d7e65e"
	I0805 10:46:30.956220    9085 logs.go:123] Gathering logs for coredns [d08309a6b024] ...
	I0805 10:46:30.956244    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08309a6b024"
	I0805 10:46:30.969082    9085 logs.go:123] Gathering logs for kube-scheduler [4cd2114f032c] ...
	I0805 10:46:30.969094    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4cd2114f032c"
	I0805 10:46:30.983832    9085 logs.go:123] Gathering logs for kube-proxy [0b4747b7c71b] ...
	I0805 10:46:30.983841    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b4747b7c71b"
	I0805 10:46:30.995548    9085 logs.go:123] Gathering logs for Docker ...
	I0805 10:46:30.995558    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:46:31.021131    9085 logs.go:123] Gathering logs for kubelet ...
	I0805 10:46:31.021140    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:46:31.058687    9085 logs.go:123] Gathering logs for dmesg ...
	I0805 10:46:31.058694    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:46:31.062910    9085 logs.go:123] Gathering logs for storage-provisioner [5ffcf9115c5b] ...
	I0805 10:46:31.062919    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ffcf9115c5b"
	I0805 10:46:31.074955    9085 logs.go:123] Gathering logs for container status ...
	I0805 10:46:31.074966    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:46:31.088063    9085 logs.go:123] Gathering logs for kube-apiserver [a10e618b4b87] ...
	I0805 10:46:31.088075    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a10e618b4b87"
	I0805 10:46:31.102730    9085 logs.go:123] Gathering logs for etcd [dd283aa612f7] ...
	I0805 10:46:31.102741    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd283aa612f7"
	I0805 10:46:31.116213    9085 logs.go:123] Gathering logs for coredns [09cf1cd1eb79] ...
	I0805 10:46:31.116225    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09cf1cd1eb79"
	I0805 10:46:31.128763    9085 logs.go:123] Gathering logs for coredns [3c0b270bfc85] ...
	I0805 10:46:31.128774    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c0b270bfc85"
	I0805 10:46:34.228228    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:46:34.228527    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:46:34.246421    9068 logs.go:276] 1 containers: [ae2e3b5c46bc]
	I0805 10:46:34.246516    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:46:34.260496    9068 logs.go:276] 1 containers: [207bbc832181]
	I0805 10:46:34.260599    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:46:34.272770    9068 logs.go:276] 5 containers: [b4a7e6734dfa 9e1f4b77dc16 c00ea726514c e2daae6ade13 1a7c8223b623]
	I0805 10:46:34.272844    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:46:34.283598    9068 logs.go:276] 1 containers: [0db98522e2bf]
	I0805 10:46:34.283667    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:46:34.294242    9068 logs.go:276] 1 containers: [acf7c60c32f0]
	I0805 10:46:34.294305    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:46:34.309658    9068 logs.go:276] 1 containers: [6e0936bd3eb7]
	I0805 10:46:34.309724    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:46:34.320057    9068 logs.go:276] 0 containers: []
	W0805 10:46:34.320071    9068 logs.go:278] No container was found matching "kindnet"
	I0805 10:46:34.320133    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:46:34.330717    9068 logs.go:276] 1 containers: [385293316d74]
	I0805 10:46:34.330734    9068 logs.go:123] Gathering logs for kube-apiserver [ae2e3b5c46bc] ...
	I0805 10:46:34.330740    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2e3b5c46bc"
	I0805 10:46:34.348581    9068 logs.go:123] Gathering logs for etcd [207bbc832181] ...
	I0805 10:46:34.348592    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207bbc832181"
	I0805 10:46:34.362522    9068 logs.go:123] Gathering logs for kubelet ...
	I0805 10:46:34.362532    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:46:34.395975    9068 logs.go:123] Gathering logs for kube-proxy [acf7c60c32f0] ...
	I0805 10:46:34.395990    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf7c60c32f0"
	I0805 10:46:34.413338    9068 logs.go:123] Gathering logs for container status ...
	I0805 10:46:34.413350    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:46:34.424971    9068 logs.go:123] Gathering logs for coredns [b4a7e6734dfa] ...
	I0805 10:46:34.424981    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4a7e6734dfa"
	I0805 10:46:34.436423    9068 logs.go:123] Gathering logs for coredns [9e1f4b77dc16] ...
	I0805 10:46:34.436446    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e1f4b77dc16"
	I0805 10:46:34.448656    9068 logs.go:123] Gathering logs for coredns [c00ea726514c] ...
	I0805 10:46:34.448667    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c00ea726514c"
	I0805 10:46:34.463695    9068 logs.go:123] Gathering logs for coredns [1a7c8223b623] ...
	I0805 10:46:34.463706    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a7c8223b623"
	W0805 10:46:34.474081    9068 logs.go:130] failed coredns [1a7c8223b623]: command: /bin/bash -c "docker logs --tail 400 1a7c8223b623" /bin/bash -c "docker logs --tail 400 1a7c8223b623": Process exited with status 1
	stdout:
	
	stderr:
	Error: No such container: 1a7c8223b623
	 output: 
	** stderr ** 
	Error: No such container: 1a7c8223b623
	
	** /stderr **
	I0805 10:46:34.474088    9068 logs.go:123] Gathering logs for kube-controller-manager [6e0936bd3eb7] ...
	I0805 10:46:34.474095    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e0936bd3eb7"
	I0805 10:46:34.491885    9068 logs.go:123] Gathering logs for Docker ...
	I0805 10:46:34.491895    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:46:34.516452    9068 logs.go:123] Gathering logs for dmesg ...
	I0805 10:46:34.516461    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:46:34.520879    9068 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:46:34.520886    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:46:34.557013    9068 logs.go:123] Gathering logs for coredns [e2daae6ade13] ...
	I0805 10:46:34.557025    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2daae6ade13"
	I0805 10:46:34.568283    9068 logs.go:123] Gathering logs for kube-scheduler [0db98522e2bf] ...
	I0805 10:46:34.568295    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0db98522e2bf"
	I0805 10:46:34.584705    9068 logs.go:123] Gathering logs for storage-provisioner [385293316d74] ...
	I0805 10:46:34.584716    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385293316d74"
	I0805 10:46:33.642330    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:46:37.104850    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:46:42.107273    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:46:42.111972    9068 out.go:177] 
	W0805 10:46:42.115978    9068 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0805 10:46:42.115995    9068 out.go:239] * 
	W0805 10:46:42.117203    9068 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 10:46:42.127901    9068 out.go:177] 
	I0805 10:46:38.644580    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:46:38.644713    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:46:38.656175    9085 logs.go:276] 1 containers: [a10e618b4b87]
	I0805 10:46:38.656248    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:46:38.666778    9085 logs.go:276] 1 containers: [dd283aa612f7]
	I0805 10:46:38.666848    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:46:38.678383    9085 logs.go:276] 4 containers: [d08309a6b024 911b32609175 09cf1cd1eb79 3c0b270bfc85]
	I0805 10:46:38.678455    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:46:38.689609    9085 logs.go:276] 1 containers: [4cd2114f032c]
	I0805 10:46:38.689675    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:46:38.699947    9085 logs.go:276] 1 containers: [0b4747b7c71b]
	I0805 10:46:38.700016    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:46:38.710832    9085 logs.go:276] 1 containers: [6b61a4d7e65e]
	I0805 10:46:38.710904    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:46:38.720964    9085 logs.go:276] 0 containers: []
	W0805 10:46:38.720977    9085 logs.go:278] No container was found matching "kindnet"
	I0805 10:46:38.721032    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:46:38.730946    9085 logs.go:276] 1 containers: [5ffcf9115c5b]
	I0805 10:46:38.730966    9085 logs.go:123] Gathering logs for kube-apiserver [a10e618b4b87] ...
	I0805 10:46:38.730971    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a10e618b4b87"
	I0805 10:46:38.750580    9085 logs.go:123] Gathering logs for coredns [3c0b270bfc85] ...
	I0805 10:46:38.750592    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c0b270bfc85"
	I0805 10:46:38.762216    9085 logs.go:123] Gathering logs for kube-proxy [0b4747b7c71b] ...
	I0805 10:46:38.762228    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b4747b7c71b"
	I0805 10:46:38.773742    9085 logs.go:123] Gathering logs for dmesg ...
	I0805 10:46:38.773754    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:46:38.778365    9085 logs.go:123] Gathering logs for coredns [d08309a6b024] ...
	I0805 10:46:38.778376    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08309a6b024"
	I0805 10:46:38.791169    9085 logs.go:123] Gathering logs for coredns [911b32609175] ...
	I0805 10:46:38.791179    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 911b32609175"
	I0805 10:46:38.802859    9085 logs.go:123] Gathering logs for storage-provisioner [5ffcf9115c5b] ...
	I0805 10:46:38.802870    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ffcf9115c5b"
	I0805 10:46:38.815996    9085 logs.go:123] Gathering logs for kubelet ...
	I0805 10:46:38.816007    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:46:38.855249    9085 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:46:38.855259    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:46:38.891701    9085 logs.go:123] Gathering logs for coredns [09cf1cd1eb79] ...
	I0805 10:46:38.891712    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09cf1cd1eb79"
	I0805 10:46:38.903498    9085 logs.go:123] Gathering logs for container status ...
	I0805 10:46:38.903509    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:46:38.915429    9085 logs.go:123] Gathering logs for etcd [dd283aa612f7] ...
	I0805 10:46:38.915440    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd283aa612f7"
	I0805 10:46:38.929958    9085 logs.go:123] Gathering logs for kube-scheduler [4cd2114f032c] ...
	I0805 10:46:38.929974    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4cd2114f032c"
	I0805 10:46:38.944123    9085 logs.go:123] Gathering logs for kube-controller-manager [6b61a4d7e65e] ...
	I0805 10:46:38.944134    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b61a4d7e65e"
	I0805 10:46:38.962065    9085 logs.go:123] Gathering logs for Docker ...
	I0805 10:46:38.962078    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:46:41.489303    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:46:46.491947    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:46:46.492062    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:46:46.504497    9085 logs.go:276] 1 containers: [a10e618b4b87]
	I0805 10:46:46.504570    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:46:46.515235    9085 logs.go:276] 1 containers: [dd283aa612f7]
	I0805 10:46:46.515314    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:46:46.526413    9085 logs.go:276] 4 containers: [ac67f2851614 d08309a6b024 911b32609175 09cf1cd1eb79]
	I0805 10:46:46.526483    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:46:46.536988    9085 logs.go:276] 1 containers: [4cd2114f032c]
	I0805 10:46:46.537049    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:46:46.550011    9085 logs.go:276] 1 containers: [0b4747b7c71b]
	I0805 10:46:46.550071    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:46:46.565711    9085 logs.go:276] 1 containers: [6b61a4d7e65e]
	I0805 10:46:46.565779    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:46:46.576383    9085 logs.go:276] 0 containers: []
	W0805 10:46:46.576394    9085 logs.go:278] No container was found matching "kindnet"
	I0805 10:46:46.576446    9085 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:46:46.587936    9085 logs.go:276] 1 containers: [5ffcf9115c5b]
	I0805 10:46:46.587954    9085 logs.go:123] Gathering logs for kube-apiserver [a10e618b4b87] ...
	I0805 10:46:46.587959    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a10e618b4b87"
	I0805 10:46:46.602178    9085 logs.go:123] Gathering logs for kube-proxy [0b4747b7c71b] ...
	I0805 10:46:46.602192    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b4747b7c71b"
	I0805 10:46:46.614624    9085 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:46:46.614637    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:46:46.649493    9085 logs.go:123] Gathering logs for coredns [d08309a6b024] ...
	I0805 10:46:46.649507    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d08309a6b024"
	I0805 10:46:46.661745    9085 logs.go:123] Gathering logs for kubelet ...
	I0805 10:46:46.661759    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:46:46.698054    9085 logs.go:123] Gathering logs for coredns [09cf1cd1eb79] ...
	I0805 10:46:46.698063    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09cf1cd1eb79"
	I0805 10:46:46.709603    9085 logs.go:123] Gathering logs for kube-scheduler [4cd2114f032c] ...
	I0805 10:46:46.709618    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4cd2114f032c"
	I0805 10:46:46.723866    9085 logs.go:123] Gathering logs for kube-controller-manager [6b61a4d7e65e] ...
	I0805 10:46:46.723881    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b61a4d7e65e"
	I0805 10:46:46.743287    9085 logs.go:123] Gathering logs for Docker ...
	I0805 10:46:46.743297    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:46:46.767654    9085 logs.go:123] Gathering logs for dmesg ...
	I0805 10:46:46.767669    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:46:46.772579    9085 logs.go:123] Gathering logs for etcd [dd283aa612f7] ...
	I0805 10:46:46.772586    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd283aa612f7"
	I0805 10:46:46.789351    9085 logs.go:123] Gathering logs for coredns [ac67f2851614] ...
	I0805 10:46:46.789364    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac67f2851614"
	I0805 10:46:46.800649    9085 logs.go:123] Gathering logs for coredns [911b32609175] ...
	I0805 10:46:46.800663    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 911b32609175"
	I0805 10:46:46.812179    9085 logs.go:123] Gathering logs for storage-provisioner [5ffcf9115c5b] ...
	I0805 10:46:46.812195    9085 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ffcf9115c5b"
	I0805 10:46:46.827013    9085 logs.go:123] Gathering logs for container status ...
	I0805 10:46:46.827027    9085 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:46:49.341110    9085 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:46:54.343388    9085 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:46:54.346948    9085 out.go:177] 
	W0805 10:46:54.350955    9085 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0805 10:46:54.350970    9085 out.go:239] * 
	W0805 10:46:54.351931    9085 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 10:46:54.363002    9085 out.go:177] 
	
	
	==> Docker <==
	-- Journal begins at Mon 2024-08-05 17:37:44 UTC, ends at Mon 2024-08-05 17:47:10 UTC. --
	Aug 05 17:46:55 running-upgrade-952000 dockerd[3222]: time="2024-08-05T17:46:55.215471936Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 05 17:46:55 running-upgrade-952000 dockerd[3222]: time="2024-08-05T17:46:55.215512144Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 05 17:46:55 running-upgrade-952000 dockerd[3222]: time="2024-08-05T17:46:55.215518227Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 17:46:55 running-upgrade-952000 dockerd[3222]: time="2024-08-05T17:46:55.215565935Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/83f50b1cfc17e257e17d7075fbcfa3f377e23898db467f0c2513ae326d4d83b3 pid=18572 runtime=io.containerd.runc.v2
	Aug 05 17:46:55 running-upgrade-952000 cri-dockerd[3064]: time="2024-08-05T17:46:55Z" level=error msg="ContainerStats resp: {0x4000a594c0 linux}"
	Aug 05 17:46:56 running-upgrade-952000 cri-dockerd[3064]: time="2024-08-05T17:46:56Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Aug 05 17:46:56 running-upgrade-952000 cri-dockerd[3064]: time="2024-08-05T17:46:56Z" level=error msg="ContainerStats resp: {0x4000917a80 linux}"
	Aug 05 17:46:56 running-upgrade-952000 cri-dockerd[3064]: time="2024-08-05T17:46:56Z" level=error msg="ContainerStats resp: {0x4000917bc0 linux}"
	Aug 05 17:46:56 running-upgrade-952000 cri-dockerd[3064]: time="2024-08-05T17:46:56Z" level=error msg="ContainerStats resp: {0x4000904740 linux}"
	Aug 05 17:46:56 running-upgrade-952000 cri-dockerd[3064]: time="2024-08-05T17:46:56Z" level=error msg="ContainerStats resp: {0x400081a5c0 linux}"
	Aug 05 17:46:56 running-upgrade-952000 cri-dockerd[3064]: time="2024-08-05T17:46:56Z" level=error msg="ContainerStats resp: {0x40009b0bc0 linux}"
	Aug 05 17:46:56 running-upgrade-952000 cri-dockerd[3064]: time="2024-08-05T17:46:56Z" level=error msg="ContainerStats resp: {0x40009b1280 linux}"
	Aug 05 17:46:56 running-upgrade-952000 cri-dockerd[3064]: time="2024-08-05T17:46:56Z" level=error msg="ContainerStats resp: {0x40009b1a00 linux}"
	Aug 05 17:47:01 running-upgrade-952000 cri-dockerd[3064]: time="2024-08-05T17:47:01Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Aug 05 17:47:06 running-upgrade-952000 cri-dockerd[3064]: time="2024-08-05T17:47:06Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Aug 05 17:47:06 running-upgrade-952000 cri-dockerd[3064]: time="2024-08-05T17:47:06Z" level=error msg="ContainerStats resp: {0x4000a58a80 linux}"
	Aug 05 17:47:06 running-upgrade-952000 cri-dockerd[3064]: time="2024-08-05T17:47:06Z" level=error msg="ContainerStats resp: {0x4000a597c0 linux}"
	Aug 05 17:47:07 running-upgrade-952000 cri-dockerd[3064]: time="2024-08-05T17:47:07Z" level=error msg="ContainerStats resp: {0x40006c4040 linux}"
	Aug 05 17:47:08 running-upgrade-952000 cri-dockerd[3064]: time="2024-08-05T17:47:08Z" level=error msg="ContainerStats resp: {0x40006c4b80 linux}"
	Aug 05 17:47:08 running-upgrade-952000 cri-dockerd[3064]: time="2024-08-05T17:47:08Z" level=error msg="ContainerStats resp: {0x40006c5ec0 linux}"
	Aug 05 17:47:08 running-upgrade-952000 cri-dockerd[3064]: time="2024-08-05T17:47:08Z" level=error msg="ContainerStats resp: {0x40003a1840 linux}"
	Aug 05 17:47:08 running-upgrade-952000 cri-dockerd[3064]: time="2024-08-05T17:47:08Z" level=error msg="ContainerStats resp: {0x400081a0c0 linux}"
	Aug 05 17:47:08 running-upgrade-952000 cri-dockerd[3064]: time="2024-08-05T17:47:08Z" level=error msg="ContainerStats resp: {0x400081a540 linux}"
	Aug 05 17:47:08 running-upgrade-952000 cri-dockerd[3064]: time="2024-08-05T17:47:08Z" level=error msg="ContainerStats resp: {0x400081a980 linux}"
	Aug 05 17:47:08 running-upgrade-952000 cri-dockerd[3064]: time="2024-08-05T17:47:08Z" level=error msg="ContainerStats resp: {0x400081aec0 linux}"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	83f50b1cfc17e       edaa71f2aee88       15 seconds ago      Running             coredns                   2                   8c0291b6162a9
	ac67f2851614a       edaa71f2aee88       25 seconds ago      Running             coredns                   2                   bf235f9ec2a6a
	d08309a6b0242       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   8c0291b6162a9
	911b326091754       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   bf235f9ec2a6a
	0b4747b7c71b6       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   a07b523756777
	5ffcf9115c5b6       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   c0986b4956e1f
	dd283aa612f73       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   fbda48f132fd8
	4cd2114f032cb       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   1a3004c323edf
	6b61a4d7e65ef       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   9324bca825502
	a10e618b4b873       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   21d2b7c9ce502
	
	
	==> coredns [83f50b1cfc17] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 3828559885344181369.5740933802498858057. HINFO: read udp 10.244.0.3:42629->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3828559885344181369.5740933802498858057. HINFO: read udp 10.244.0.3:51785->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3828559885344181369.5740933802498858057. HINFO: read udp 10.244.0.3:36487->10.0.2.3:53: i/o timeout
	
	
	==> coredns [911b32609175] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 8248811134491543803.4376579858617962006. HINFO: read udp 10.244.0.2:37871->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8248811134491543803.4376579858617962006. HINFO: read udp 10.244.0.2:53821->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8248811134491543803.4376579858617962006. HINFO: read udp 10.244.0.2:39455->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8248811134491543803.4376579858617962006. HINFO: read udp 10.244.0.2:35180->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8248811134491543803.4376579858617962006. HINFO: read udp 10.244.0.2:40089->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8248811134491543803.4376579858617962006. HINFO: read udp 10.244.0.2:32945->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8248811134491543803.4376579858617962006. HINFO: read udp 10.244.0.2:43442->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8248811134491543803.4376579858617962006. HINFO: read udp 10.244.0.2:50159->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8248811134491543803.4376579858617962006. HINFO: read udp 10.244.0.2:49709->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8248811134491543803.4376579858617962006. HINFO: read udp 10.244.0.2:47168->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [ac67f2851614] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 408431938114847396.8155574525151491623. HINFO: read udp 10.244.0.2:40560->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 408431938114847396.8155574525151491623. HINFO: read udp 10.244.0.2:56365->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 408431938114847396.8155574525151491623. HINFO: read udp 10.244.0.2:33565->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 408431938114847396.8155574525151491623. HINFO: read udp 10.244.0.2:36740->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 408431938114847396.8155574525151491623. HINFO: read udp 10.244.0.2:49243->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 408431938114847396.8155574525151491623. HINFO: read udp 10.244.0.2:50258->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 408431938114847396.8155574525151491623. HINFO: read udp 10.244.0.2:48186->10.0.2.3:53: i/o timeout
	
	
	==> coredns [d08309a6b024] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 1498764964797212291.4906961763069350234. HINFO: read udp 10.244.0.3:56957->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1498764964797212291.4906961763069350234. HINFO: read udp 10.244.0.3:35877->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1498764964797212291.4906961763069350234. HINFO: read udp 10.244.0.3:50421->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1498764964797212291.4906961763069350234. HINFO: read udp 10.244.0.3:34930->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1498764964797212291.4906961763069350234. HINFO: read udp 10.244.0.3:38925->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1498764964797212291.4906961763069350234. HINFO: read udp 10.244.0.3:50236->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1498764964797212291.4906961763069350234. HINFO: read udp 10.244.0.3:52081->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1498764964797212291.4906961763069350234. HINFO: read udp 10.244.0.3:43884->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1498764964797212291.4906961763069350234. HINFO: read udp 10.244.0.3:45440->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1498764964797212291.4906961763069350234. HINFO: read udp 10.244.0.3:34335->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               running-upgrade-952000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-952000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7ab1b4d76a5d87b75cd4b70be3ee81f93304b0ab
	                    minikube.k8s.io/name=running-upgrade-952000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_05T10_42_53_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 05 Aug 2024 17:42:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-952000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 05 Aug 2024 17:47:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 05 Aug 2024 17:42:53 +0000   Mon, 05 Aug 2024 17:42:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 05 Aug 2024 17:42:53 +0000   Mon, 05 Aug 2024 17:42:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 05 Aug 2024 17:42:53 +0000   Mon, 05 Aug 2024 17:42:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 05 Aug 2024 17:42:53 +0000   Mon, 05 Aug 2024 17:42:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-952000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 68ef595be2b5450ead271564ab42c460
	  System UUID:                68ef595be2b5450ead271564ab42c460
	  Boot ID:                    f6592d1c-8b44-4ba2-8507-37ed21cd2e66
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-67ptx                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m3s
	  kube-system                 coredns-6d4b75cb6d-xjcsp                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m3s
	  kube-system                 etcd-running-upgrade-952000                       100m (5%)    0 (0%)      100Mi (4%)       0 (0%)         4m17s
	  kube-system                 kube-apiserver-running-upgrade-952000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-controller-manager-running-upgrade-952000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-proxy-2mhw8                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-scheduler-running-upgrade-952000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m3s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  4m23s (x5 over 4m23s)  kubelet          Node running-upgrade-952000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m23s (x4 over 4m23s)  kubelet          Node running-upgrade-952000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m23s (x4 over 4m23s)  kubelet          Node running-upgrade-952000 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  4m17s                  kubelet          Node running-upgrade-952000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  4m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    4m17s                  kubelet          Node running-upgrade-952000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m17s                  kubelet          Node running-upgrade-952000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m17s                  kubelet          Node running-upgrade-952000 status is now: NodeReady
	  Normal  Starting                 4m17s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m4s                   node-controller  Node running-upgrade-952000 event: Registered Node running-upgrade-952000 in Controller
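
Note: in the "Allocated resources" table above, percentages are taken against node allocatable, e.g. 850m of requested CPU out of 2 allocatable CPUs (2000m) is 850*100/2000, truncated to 42%, and 240Mi of memory out of 2148820Ki is about 11%. The same summary can be re-queried with plain kubectl (node name taken from this report):

	kubectl describe node running-upgrade-952000 | grep -A 10 'Allocated resources'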
	
	
	==> dmesg <==
	[  +1.740736] systemd-fstab-generator[876]: Ignoring "noauto" for root device
	[  +0.066555] systemd-fstab-generator[887]: Ignoring "noauto" for root device
	[  +0.059967] systemd-fstab-generator[898]: Ignoring "noauto" for root device
	[  +1.140140] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.072519] systemd-fstab-generator[1048]: Ignoring "noauto" for root device
	[  +0.062008] systemd-fstab-generator[1059]: Ignoring "noauto" for root device
	[Aug 5 17:38] systemd-fstab-generator[1285]: Ignoring "noauto" for root device
	[ +10.119186] systemd-fstab-generator[1922]: Ignoring "noauto" for root device
	[ +11.701140] systemd-fstab-generator[2213]: Ignoring "noauto" for root device
	[  +0.130069] systemd-fstab-generator[2248]: Ignoring "noauto" for root device
	[  +0.078898] systemd-fstab-generator[2259]: Ignoring "noauto" for root device
	[  +0.079448] systemd-fstab-generator[2272]: Ignoring "noauto" for root device
	[ +12.563315] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.230188] systemd-fstab-generator[3019]: Ignoring "noauto" for root device
	[  +0.070953] systemd-fstab-generator[3032]: Ignoring "noauto" for root device
	[  +0.065377] systemd-fstab-generator[3043]: Ignoring "noauto" for root device
	[  +0.075129] systemd-fstab-generator[3057]: Ignoring "noauto" for root device
	[  +2.488856] systemd-fstab-generator[3209]: Ignoring "noauto" for root device
	[  +3.203883] systemd-fstab-generator[3882]: Ignoring "noauto" for root device
	[  +1.409295] systemd-fstab-generator[4272]: Ignoring "noauto" for root device
	[ +17.061034] kauditd_printk_skb: 68 callbacks suppressed
	[Aug 5 17:39] kauditd_printk_skb: 19 callbacks suppressed
	[Aug 5 17:42] systemd-fstab-generator[11597]: Ignoring "noauto" for root device
	[  +5.619834] systemd-fstab-generator[12193]: Ignoring "noauto" for root device
	[  +0.462393] systemd-fstab-generator[12333]: Ignoring "noauto" for root device
	
	
	==> etcd [dd283aa612f7] <==
	{"level":"info","ts":"2024-08-05T17:42:48.684Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-08-05T17:42:48.684Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-08-05T17:42:48.685Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-05T17:42:48.688Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-05T17:42:48.692Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-05T17:42:48.692Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-08-05T17:42:48.692Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-08-05T17:42:49.254Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-05T17:42:49.254Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-05T17:42:49.254Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-08-05T17:42:49.254Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-08-05T17:42:49.254Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-08-05T17:42:49.254Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-08-05T17:42:49.254Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-08-05T17:42:49.254Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-952000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-05T17:42:49.255Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T17:42:49.256Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-05T17:42:49.257Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-05T17:42:49.257Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-05T17:42:49.257Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T17:42:49.257Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T17:42:49.257Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T17:42:49.257Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-05T17:42:49.257Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-05T17:42:49.258Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	
	
	==> kernel <==
	 17:47:10 up 9 min,  0 users,  load average: 0.14, 0.15, 0.09
	Linux running-upgrade-952000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [a10e618b4b87] <==
	I0805 17:42:50.644816       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0805 17:42:50.649985       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0805 17:42:50.650000       1 cache.go:39] Caches are synced for autoregister controller
	I0805 17:42:50.650021       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0805 17:42:50.650134       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0805 17:42:50.653318       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0805 17:42:50.678872       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0805 17:42:51.385293       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0805 17:42:51.551942       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0805 17:42:51.553288       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0805 17:42:51.553297       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0805 17:42:51.671378       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0805 17:42:51.682144       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0805 17:42:51.724060       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0805 17:42:51.726362       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0805 17:42:51.726788       1 controller.go:611] quota admission added evaluator for: endpoints
	I0805 17:42:51.728316       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0805 17:42:52.724661       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0805 17:42:53.249045       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0805 17:42:53.254091       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0805 17:42:53.258969       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0805 17:42:53.299359       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0805 17:43:06.836161       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0805 17:43:07.032636       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0805 17:43:07.365955       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [6b61a4d7e65e] <==
	I0805 17:43:06.830036       1 shared_informer.go:262] Caches are synced for PVC protection
	I0805 17:43:06.833120       1 shared_informer.go:262] Caches are synced for node
	I0805 17:43:06.833171       1 range_allocator.go:173] Starting range CIDR allocator
	I0805 17:43:06.833190       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I0805 17:43:06.833205       1 shared_informer.go:262] Caches are synced for cidrallocator
	I0805 17:43:06.835486       1 shared_informer.go:262] Caches are synced for persistent volume
	I0805 17:43:06.837383       1 range_allocator.go:374] Set node running-upgrade-952000 PodCIDR to [10.244.0.0/24]
	I0805 17:43:06.837859       1 shared_informer.go:262] Caches are synced for ephemeral
	I0805 17:43:06.839169       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-2mhw8"
	I0805 17:43:06.869171       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0805 17:43:06.870304       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0805 17:43:06.932148       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0805 17:43:07.019450       1 shared_informer.go:262] Caches are synced for disruption
	I0805 17:43:07.019487       1 disruption.go:371] Sending events to api server.
	I0805 17:43:07.028605       1 shared_informer.go:262] Caches are synced for deployment
	I0805 17:43:07.029692       1 shared_informer.go:262] Caches are synced for resource quota
	I0805 17:43:07.033789       1 shared_informer.go:262] Caches are synced for cronjob
	I0805 17:43:07.034022       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0805 17:43:07.037925       1 shared_informer.go:262] Caches are synced for resource quota
	I0805 17:43:07.040647       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I0805 17:43:07.045080       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-xjcsp"
	I0805 17:43:07.049033       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-67ptx"
	I0805 17:43:07.453766       1 shared_informer.go:262] Caches are synced for garbage collector
	I0805 17:43:07.469943       1 shared_informer.go:262] Caches are synced for garbage collector
	I0805 17:43:07.469955       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	
	==> kube-proxy [0b4747b7c71b] <==
	I0805 17:43:07.354678       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0805 17:43:07.354703       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0805 17:43:07.354724       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0805 17:43:07.363851       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0805 17:43:07.363862       1 server_others.go:206] "Using iptables Proxier"
	I0805 17:43:07.363965       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0805 17:43:07.364205       1 server.go:661] "Version info" version="v1.24.1"
	I0805 17:43:07.364243       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0805 17:43:07.364568       1 config.go:317] "Starting service config controller"
	I0805 17:43:07.364579       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0805 17:43:07.364631       1 config.go:226] "Starting endpoint slice config controller"
	I0805 17:43:07.364636       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0805 17:43:07.364930       1 config.go:444] "Starting node config controller"
	I0805 17:43:07.364952       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0805 17:43:07.464824       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0805 17:43:07.464825       1 shared_informer.go:262] Caches are synced for service config
	I0805 17:43:07.465008       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [4cd2114f032c] <==
	W0805 17:42:50.611797       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0805 17:42:50.612205       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0805 17:42:50.611809       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0805 17:42:50.612261       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0805 17:42:50.611819       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0805 17:42:50.612312       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0805 17:42:50.611829       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0805 17:42:50.612345       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0805 17:42:50.611840       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0805 17:42:50.612394       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0805 17:42:50.611855       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0805 17:42:50.612431       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0805 17:42:50.611869       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0805 17:42:50.612492       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0805 17:42:50.611886       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0805 17:42:50.612532       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0805 17:42:50.611539       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0805 17:42:50.612590       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0805 17:42:51.443963       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0805 17:42:51.443979       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0805 17:42:51.480994       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0805 17:42:51.481032       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0805 17:42:51.491478       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0805 17:42:51.491507       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0805 17:42:52.013188       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
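
Note: the "is forbidden" warnings above are the usual kubeadm bootstrap race: the scheduler's informers start listing resources before the default RBAC roles and bindings exist, and the noise stops once they are created (see the final "Caches are synced" line). If such errors persisted past startup, inspecting the bootstrap binding with plain kubectl would be a first step:

	kubectl get clusterrolebinding system:kube-scheduler -o wide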
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-08-05 17:37:44 UTC, ends at Mon 2024-08-05 17:47:10 UTC. --
	Aug 05 17:42:53 running-upgrade-952000 kubelet[12203]: I0805 17:42:53.500148   12203 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/529823204bb207fdb87dcae88e927df8-ca-certs\") pod \"kube-apiserver-running-upgrade-952000\" (UID: \"529823204bb207fdb87dcae88e927df8\") " pod="kube-system/kube-apiserver-running-upgrade-952000"
	Aug 05 17:42:53 running-upgrade-952000 kubelet[12203]: I0805 17:42:53.500163   12203 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/529823204bb207fdb87dcae88e927df8-k8s-certs\") pod \"kube-apiserver-running-upgrade-952000\" (UID: \"529823204bb207fdb87dcae88e927df8\") " pod="kube-system/kube-apiserver-running-upgrade-952000"
	Aug 05 17:42:53 running-upgrade-952000 kubelet[12203]: I0805 17:42:53.500173   12203 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/39563d247a35a0023b54c62832164c59-ca-certs\") pod \"kube-controller-manager-running-upgrade-952000\" (UID: \"39563d247a35a0023b54c62832164c59\") " pod="kube-system/kube-controller-manager-running-upgrade-952000"
	Aug 05 17:42:53 running-upgrade-952000 kubelet[12203]: I0805 17:42:53.500182   12203 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/39563d247a35a0023b54c62832164c59-flexvolume-dir\") pod \"kube-controller-manager-running-upgrade-952000\" (UID: \"39563d247a35a0023b54c62832164c59\") " pod="kube-system/kube-controller-manager-running-upgrade-952000"
	Aug 05 17:42:54 running-upgrade-952000 kubelet[12203]: I0805 17:42:54.275400   12203 apiserver.go:52] "Watching apiserver"
	Aug 05 17:42:54 running-upgrade-952000 kubelet[12203]: I0805 17:42:54.713823   12203 reconciler.go:157] "Reconciler: start to sync state"
	Aug 05 17:42:54 running-upgrade-952000 kubelet[12203]: E0805 17:42:54.875982   12203 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-scheduler-running-upgrade-952000\" already exists" pod="kube-system/kube-scheduler-running-upgrade-952000"
	Aug 05 17:43:06 running-upgrade-952000 kubelet[12203]: I0805 17:43:06.755208   12203 topology_manager.go:200] "Topology Admit Handler"
	Aug 05 17:43:06 running-upgrade-952000 kubelet[12203]: I0805 17:43:06.841934   12203 topology_manager.go:200] "Topology Admit Handler"
	Aug 05 17:43:06 running-upgrade-952000 kubelet[12203]: I0805 17:43:06.881879   12203 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 05 17:43:06 running-upgrade-952000 kubelet[12203]: I0805 17:43:06.882032   12203 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/a6f8f3f7-8a96-4bb3-8c4a-0dfb68be25e8-tmp\") pod \"storage-provisioner\" (UID: \"a6f8f3f7-8a96-4bb3-8c4a-0dfb68be25e8\") " pod="kube-system/storage-provisioner"
	Aug 05 17:43:06 running-upgrade-952000 kubelet[12203]: I0805 17:43:06.882051   12203 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmnpq\" (UniqueName: \"kubernetes.io/projected/95cfdaa4-b321-46be-8bf9-1fa73c871717-kube-api-access-xmnpq\") pod \"kube-proxy-2mhw8\" (UID: \"95cfdaa4-b321-46be-8bf9-1fa73c871717\") " pod="kube-system/kube-proxy-2mhw8"
	Aug 05 17:43:06 running-upgrade-952000 kubelet[12203]: I0805 17:43:06.882062   12203 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-649qk\" (UniqueName: \"kubernetes.io/projected/a6f8f3f7-8a96-4bb3-8c4a-0dfb68be25e8-kube-api-access-649qk\") pod \"storage-provisioner\" (UID: \"a6f8f3f7-8a96-4bb3-8c4a-0dfb68be25e8\") " pod="kube-system/storage-provisioner"
	Aug 05 17:43:06 running-upgrade-952000 kubelet[12203]: I0805 17:43:06.882072   12203 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/95cfdaa4-b321-46be-8bf9-1fa73c871717-xtables-lock\") pod \"kube-proxy-2mhw8\" (UID: \"95cfdaa4-b321-46be-8bf9-1fa73c871717\") " pod="kube-system/kube-proxy-2mhw8"
	Aug 05 17:43:06 running-upgrade-952000 kubelet[12203]: I0805 17:43:06.882082   12203 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/95cfdaa4-b321-46be-8bf9-1fa73c871717-lib-modules\") pod \"kube-proxy-2mhw8\" (UID: \"95cfdaa4-b321-46be-8bf9-1fa73c871717\") " pod="kube-system/kube-proxy-2mhw8"
	Aug 05 17:43:06 running-upgrade-952000 kubelet[12203]: I0805 17:43:06.882092   12203 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/95cfdaa4-b321-46be-8bf9-1fa73c871717-kube-proxy\") pod \"kube-proxy-2mhw8\" (UID: \"95cfdaa4-b321-46be-8bf9-1fa73c871717\") " pod="kube-system/kube-proxy-2mhw8"
	Aug 05 17:43:06 running-upgrade-952000 kubelet[12203]: I0805 17:43:06.882409   12203 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 05 17:43:07 running-upgrade-952000 kubelet[12203]: I0805 17:43:07.047730   12203 topology_manager.go:200] "Topology Admit Handler"
	Aug 05 17:43:07 running-upgrade-952000 kubelet[12203]: I0805 17:43:07.055983   12203 topology_manager.go:200] "Topology Admit Handler"
	Aug 05 17:43:07 running-upgrade-952000 kubelet[12203]: I0805 17:43:07.186412   12203 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/70c4dcb1-f07c-413d-b5a2-810b122933e6-config-volume\") pod \"coredns-6d4b75cb6d-67ptx\" (UID: \"70c4dcb1-f07c-413d-b5a2-810b122933e6\") " pod="kube-system/coredns-6d4b75cb6d-67ptx"
	Aug 05 17:43:07 running-upgrade-952000 kubelet[12203]: I0805 17:43:07.186435   12203 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65d8q\" (UniqueName: \"kubernetes.io/projected/70c4dcb1-f07c-413d-b5a2-810b122933e6-kube-api-access-65d8q\") pod \"coredns-6d4b75cb6d-67ptx\" (UID: \"70c4dcb1-f07c-413d-b5a2-810b122933e6\") " pod="kube-system/coredns-6d4b75cb6d-67ptx"
	Aug 05 17:43:07 running-upgrade-952000 kubelet[12203]: I0805 17:43:07.186447   12203 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpghb\" (UniqueName: \"kubernetes.io/projected/ac1efa30-3746-41a1-8968-9049b73de1dc-kube-api-access-fpghb\") pod \"coredns-6d4b75cb6d-xjcsp\" (UID: \"ac1efa30-3746-41a1-8968-9049b73de1dc\") " pod="kube-system/coredns-6d4b75cb6d-xjcsp"
	Aug 05 17:43:07 running-upgrade-952000 kubelet[12203]: I0805 17:43:07.186458   12203 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac1efa30-3746-41a1-8968-9049b73de1dc-config-volume\") pod \"coredns-6d4b75cb6d-xjcsp\" (UID: \"ac1efa30-3746-41a1-8968-9049b73de1dc\") " pod="kube-system/coredns-6d4b75cb6d-xjcsp"
	Aug 05 17:46:45 running-upgrade-952000 kubelet[12203]: I0805 17:46:45.487279   12203 scope.go:110] "RemoveContainer" containerID="3c0b270bfc85115a0ff44773c9563b31cf8e912a7d49eaadf679ec452f20d934"
	Aug 05 17:46:55 running-upgrade-952000 kubelet[12203]: I0805 17:46:55.571762   12203 scope.go:110] "RemoveContainer" containerID="09cf1cd1eb79bdd2aeb5f90f9cd69b1f3a0691c4def0f93c4861785eb1b622b7"
	
	
	==> storage-provisioner [5ffcf9115c5b] <==
	I0805 17:43:07.292890       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0805 17:43:07.300063       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0805 17:43:07.300161       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0805 17:43:07.305293       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0805 17:43:07.306392       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-952000_f19fa18a-a58c-4afb-bc65-9376c71b2c64!
	I0805 17:43:07.307040       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e7ed3c3f-cc27-42a9-a4bb-bd658ba67a1d", APIVersion:"v1", ResourceVersion:"363", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-952000_f19fa18a-a58c-4afb-bc65-9376c71b2c64 became leader
	I0805 17:43:07.406623       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-952000_f19fa18a-a58c-4afb-bc65-9376c71b2c64!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-952000 -n running-upgrade-952000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-952000 -n running-upgrade-952000: exit status 2 (15.694020667s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-952000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-952000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-952000
--- FAIL: TestRunningBinaryUpgrade (626.41s)
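
Note: the status probe above used a single-field Go template ({{.APIServer}}), which is why only "Stopped" was printed. When triaging by hand, widening the template to the other fields of minikube's status struct shows whether the VM itself is still up (same binary and profile as above):

	out/minikube-darwin-arm64 status -p running-upgrade-952000 --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'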

TestKubernetesUpgrade (17.5s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-234000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-234000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.994032541s)

-- stdout --
	* [kubernetes-upgrade-234000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19374
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-234000" primary control-plane node in "kubernetes-upgrade-234000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-234000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
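
Note: the 'Failed to connect to "/var/run/socket_vmnet": Connection refused' errors above (and in most failures in this report) mean nothing is listening on the socket_vmnet socket on the host; the qemu2 driver relays the VM's virtio NIC through that socket (the "-netdev socket,id=net0,fd=3" argument in the stderr trace below). A host-side sanity check, assuming the Homebrew setup the minikube QEMU driver docs describe, might be:

	ls -l /var/run/socket_vmnet
	sudo brew services start socket_vmnet
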
** stderr ** 
	I0805 10:36:43.243396    8949 out.go:291] Setting OutFile to fd 1 ...
	I0805 10:36:43.243535    8949 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:36:43.243538    8949 out.go:304] Setting ErrFile to fd 2...
	I0805 10:36:43.243541    8949 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:36:43.243671    8949 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
	I0805 10:36:43.244762    8949 out.go:298] Setting JSON to false
	I0805 10:36:43.260920    8949 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5773,"bootTime":1722873630,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 10:36:43.260992    8949 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 10:36:43.266033    8949 out.go:177] * [kubernetes-upgrade-234000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 10:36:43.272276    8949 out.go:177]   - MINIKUBE_LOCATION=19374
	I0805 10:36:43.272344    8949 notify.go:220] Checking for updates...
	I0805 10:36:43.279182    8949 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	I0805 10:36:43.282203    8949 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 10:36:43.285198    8949 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 10:36:43.288226    8949 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	I0805 10:36:43.291198    8949 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 10:36:43.294480    8949 config.go:182] Loaded profile config "multinode-022000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 10:36:43.294540    8949 config.go:182] Loaded profile config "offline-docker-828000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 10:36:43.294615    8949 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 10:36:43.299194    8949 out.go:177] * Using the qemu2 driver based on user configuration
	I0805 10:36:43.306268    8949 start.go:297] selected driver: qemu2
	I0805 10:36:43.306275    8949 start.go:901] validating driver "qemu2" against <nil>
	I0805 10:36:43.306282    8949 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 10:36:43.308557    8949 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 10:36:43.311183    8949 out.go:177] * Automatically selected the socket_vmnet network
	I0805 10:36:43.314244    8949 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0805 10:36:43.314261    8949 cni.go:84] Creating CNI manager for ""
	I0805 10:36:43.314267    8949 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0805 10:36:43.314290    8949 start.go:340] cluster config:
	{Name:kubernetes-upgrade-234000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-234000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 10:36:43.317992    8949 iso.go:125] acquiring lock: {Name:mk3c732f608e41abb95f9d7b90e7a96dff21b06c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 10:36:43.324214    8949 out.go:177] * Starting "kubernetes-upgrade-234000" primary control-plane node in "kubernetes-upgrade-234000" cluster
	I0805 10:36:43.328185    8949 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0805 10:36:43.328205    8949 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0805 10:36:43.328222    8949 cache.go:56] Caching tarball of preloaded images
	I0805 10:36:43.328285    8949 preload.go:172] Found /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 10:36:43.328294    8949 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0805 10:36:43.328355    8949 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/kubernetes-upgrade-234000/config.json ...
	I0805 10:36:43.328372    8949 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/kubernetes-upgrade-234000/config.json: {Name:mkbbb72a2f965f0eea2a476729d4886aea7e3fbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 10:36:43.328786    8949 start.go:360] acquireMachinesLock for kubernetes-upgrade-234000: {Name:mkb00f076c5be0afa0d8c4c2e732c0f60f89f86b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 10:36:43.348124    8949 start.go:364] duration metric: took 19.328917ms to acquireMachinesLock for "kubernetes-upgrade-234000"
	I0805 10:36:43.348145    8949 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-234000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-234000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 10:36:43.348219    8949 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 10:36:43.357230    8949 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0805 10:36:43.376454    8949 start.go:159] libmachine.API.Create for "kubernetes-upgrade-234000" (driver="qemu2")
	I0805 10:36:43.376480    8949 client.go:168] LocalClient.Create starting
	I0805 10:36:43.376547    8949 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/ca.pem
	I0805 10:36:43.376580    8949 main.go:141] libmachine: Decoding PEM data...
	I0805 10:36:43.376590    8949 main.go:141] libmachine: Parsing certificate...
	I0805 10:36:43.376632    8949 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/cert.pem
	I0805 10:36:43.376656    8949 main.go:141] libmachine: Decoding PEM data...
	I0805 10:36:43.376664    8949 main.go:141] libmachine: Parsing certificate...
	I0805 10:36:43.380558    8949 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19374-6507/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 10:36:43.612684    8949 main.go:141] libmachine: Creating SSH key...
	I0805 10:36:43.771291    8949 main.go:141] libmachine: Creating Disk image...
	I0805 10:36:43.771301    8949 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 10:36:43.771511    8949 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/kubernetes-upgrade-234000/disk.qcow2.raw /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/kubernetes-upgrade-234000/disk.qcow2
	I0805 10:36:43.780873    8949 main.go:141] libmachine: STDOUT: 
	I0805 10:36:43.780894    8949 main.go:141] libmachine: STDERR: 
	I0805 10:36:43.780954    8949 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/kubernetes-upgrade-234000/disk.qcow2 +20000M
	I0805 10:36:43.788930    8949 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 10:36:43.788946    8949 main.go:141] libmachine: STDERR: 
	I0805 10:36:43.788961    8949 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/kubernetes-upgrade-234000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/kubernetes-upgrade-234000/disk.qcow2
	I0805 10:36:43.788966    8949 main.go:141] libmachine: Starting QEMU VM...
	I0805 10:36:43.788979    8949 qemu.go:418] Using hvf for hardware acceleration
	I0805 10:36:43.789014    8949 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/kubernetes-upgrade-234000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/kubernetes-upgrade-234000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/kubernetes-upgrade-234000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:1b:04:d6:07:24 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/kubernetes-upgrade-234000/disk.qcow2
	I0805 10:36:43.790608    8949 main.go:141] libmachine: STDOUT: 
	I0805 10:36:43.790624    8949 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 10:36:43.790639    8949 client.go:171] duration metric: took 414.160083ms to LocalClient.Create
	I0805 10:36:45.792808    8949 start.go:128] duration metric: took 2.444592416s to createHost
	I0805 10:36:45.792881    8949 start.go:83] releasing machines lock for "kubernetes-upgrade-234000", held for 2.444778375s
	W0805 10:36:45.793025    8949 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 10:36:45.811390    8949 out.go:177] * Deleting "kubernetes-upgrade-234000" in qemu2 ...
	W0805 10:36:45.845336    8949 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 10:36:45.845363    8949 start.go:729] Will try again in 5 seconds ...
	I0805 10:36:50.845926    8949 start.go:360] acquireMachinesLock for kubernetes-upgrade-234000: {Name:mkb00f076c5be0afa0d8c4c2e732c0f60f89f86b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 10:36:50.845999    8949 start.go:364] duration metric: took 56.041µs to acquireMachinesLock for "kubernetes-upgrade-234000"
	I0805 10:36:50.846012    8949 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-234000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-234000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 10:36:50.846045    8949 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 10:36:50.855282    8949 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0805 10:36:50.870811    8949 start.go:159] libmachine.API.Create for "kubernetes-upgrade-234000" (driver="qemu2")
	I0805 10:36:50.870837    8949 client.go:168] LocalClient.Create starting
	I0805 10:36:50.870893    8949 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/ca.pem
	I0805 10:36:50.870923    8949 main.go:141] libmachine: Decoding PEM data...
	I0805 10:36:50.870931    8949 main.go:141] libmachine: Parsing certificate...
	I0805 10:36:50.870966    8949 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/cert.pem
	I0805 10:36:50.870992    8949 main.go:141] libmachine: Decoding PEM data...
	I0805 10:36:50.871003    8949 main.go:141] libmachine: Parsing certificate...
	I0805 10:36:50.873953    8949 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19374-6507/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 10:36:51.023362    8949 main.go:141] libmachine: Creating SSH key...
	I0805 10:36:51.146123    8949 main.go:141] libmachine: Creating Disk image...
	I0805 10:36:51.146129    8949 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 10:36:51.146311    8949 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/kubernetes-upgrade-234000/disk.qcow2.raw /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/kubernetes-upgrade-234000/disk.qcow2
	I0805 10:36:51.155945    8949 main.go:141] libmachine: STDOUT: 
	I0805 10:36:51.155968    8949 main.go:141] libmachine: STDERR: 
	I0805 10:36:51.156018    8949 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/kubernetes-upgrade-234000/disk.qcow2 +20000M
	I0805 10:36:51.164182    8949 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 10:36:51.164196    8949 main.go:141] libmachine: STDERR: 
	I0805 10:36:51.164207    8949 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/kubernetes-upgrade-234000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/kubernetes-upgrade-234000/disk.qcow2
	I0805 10:36:51.164219    8949 main.go:141] libmachine: Starting QEMU VM...
	I0805 10:36:51.164230    8949 qemu.go:418] Using hvf for hardware acceleration
	I0805 10:36:51.164265    8949 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/kubernetes-upgrade-234000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/kubernetes-upgrade-234000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/kubernetes-upgrade-234000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:54:97:d8:9c:2d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/kubernetes-upgrade-234000/disk.qcow2
	I0805 10:36:51.165899    8949 main.go:141] libmachine: STDOUT: 
	I0805 10:36:51.165917    8949 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 10:36:51.165931    8949 client.go:171] duration metric: took 295.094875ms to LocalClient.Create
	I0805 10:36:53.168042    8949 start.go:128] duration metric: took 2.322011s to createHost
	I0805 10:36:53.168093    8949 start.go:83] releasing machines lock for "kubernetes-upgrade-234000", held for 2.322114958s
	W0805 10:36:53.168458    8949 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-234000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-234000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 10:36:53.182791    8949 out.go:177] 
	W0805 10:36:53.186916    8949 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 10:36:53.186946    8949 out.go:239] * 
	* 
	W0805 10:36:53.189262    8949 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 10:36:53.198852    8949 out.go:177] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-234000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
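Every start attempt above dies on the same error, Failed to connect to "/var/run/socket_vmnet": Connection refused, so the failure is environmental (no socket_vmnet daemon accepting connections for the qemu2 driver's networking) rather than anything specific to Kubernetes v1.20.0. A minimal standalone Go probe of that socket, assuming only the path shown in the log, looks like:

	// probe_socket_vmnet.go -- hedged sketch, not part of the test suite;
	// it dials the same unix socket the qemu2 driver's networking needs.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// Matches the failure above: "connection refused" means the
			// path exists but nothing is listening behind it.
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}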
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-234000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-234000: (2.137730666s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-234000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-234000 status --format={{.Host}}: exit status 7 (64.949792ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
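The harness accepts exit status 7 here because the preceding stop succeeded: --format={{.Host}} renders just the host field of the status output through a Go template, and the non-zero exit encodes a non-Running state rather than a command error. A hedged sketch of the same probe (binary path and profile name copied from the log above):

	// status_probe.go -- illustrative only; mirrors the harness's check.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("out/minikube-darwin-arm64", "-p", "kubernetes-upgrade-234000",
			"status", "--format", "{{.Host}}").CombinedOutput()
		fmt.Printf("host state: %s\n", out) // "Stopped" in the run above
		if ee, ok := err.(*exec.ExitError); ok {
			// 7 in this run; non-zero for any non-Running state.
			fmt.Println("status exit code:", ee.ExitCode())
		}
	}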
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-234000 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-234000 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.171606541s)

-- stdout --
	* [kubernetes-upgrade-234000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19374
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-234000" primary control-plane node in "kubernetes-upgrade-234000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-234000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-234000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0805 10:36:55.446152    8999 out.go:291] Setting OutFile to fd 1 ...
	I0805 10:36:55.446276    8999 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:36:55.446279    8999 out.go:304] Setting ErrFile to fd 2...
	I0805 10:36:55.446282    8999 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:36:55.446432    8999 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
	I0805 10:36:55.447461    8999 out.go:298] Setting JSON to false
	I0805 10:36:55.463335    8999 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5785,"bootTime":1722873630,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 10:36:55.463397    8999 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 10:36:55.468783    8999 out.go:177] * [kubernetes-upgrade-234000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 10:36:55.476729    8999 out.go:177]   - MINIKUBE_LOCATION=19374
	I0805 10:36:55.476763    8999 notify.go:220] Checking for updates...
	I0805 10:36:55.483715    8999 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	I0805 10:36:55.486690    8999 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 10:36:55.489738    8999 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 10:36:55.492719    8999 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	I0805 10:36:55.495693    8999 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 10:36:55.499028    8999 config.go:182] Loaded profile config "kubernetes-upgrade-234000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0805 10:36:55.499289    8999 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 10:36:55.503636    8999 out.go:177] * Using the qemu2 driver based on existing profile
	I0805 10:36:55.510702    8999 start.go:297] selected driver: qemu2
	I0805 10:36:55.510714    8999 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-234000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-234000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 10:36:55.510775    8999 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 10:36:55.513037    8999 cni.go:84] Creating CNI manager for ""
	I0805 10:36:55.513053    8999 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 10:36:55.513079    8999 start.go:340] cluster config:
	{Name:kubernetes-upgrade-234000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:kubernetes-upgrade-234000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 10:36:55.516494    8999 iso.go:125] acquiring lock: {Name:mk3c732f608e41abb95f9d7b90e7a96dff21b06c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 10:36:55.524272    8999 out.go:177] * Starting "kubernetes-upgrade-234000" primary control-plane node in "kubernetes-upgrade-234000" cluster
	I0805 10:36:55.527643    8999 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
	I0805 10:36:55.527656    8999 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4
	I0805 10:36:55.527664    8999 cache.go:56] Caching tarball of preloaded images
	I0805 10:36:55.527719    8999 preload.go:172] Found /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 10:36:55.527724    8999 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-rc.0 on docker
	I0805 10:36:55.527769    8999 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/kubernetes-upgrade-234000/config.json ...
	I0805 10:36:55.528189    8999 start.go:360] acquireMachinesLock for kubernetes-upgrade-234000: {Name:mkb00f076c5be0afa0d8c4c2e732c0f60f89f86b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 10:36:55.528225    8999 start.go:364] duration metric: took 29.334µs to acquireMachinesLock for "kubernetes-upgrade-234000"
	I0805 10:36:55.528234    8999 start.go:96] Skipping create...Using existing machine configuration
	I0805 10:36:55.528240    8999 fix.go:54] fixHost starting: 
	I0805 10:36:55.528349    8999 fix.go:112] recreateIfNeeded on kubernetes-upgrade-234000: state=Stopped err=<nil>
	W0805 10:36:55.528356    8999 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 10:36:55.536638    8999 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-234000" ...
	I0805 10:36:55.539745    8999 qemu.go:418] Using hvf for hardware acceleration
	I0805 10:36:55.539799    8999 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/kubernetes-upgrade-234000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/kubernetes-upgrade-234000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/kubernetes-upgrade-234000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:54:97:d8:9c:2d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/kubernetes-upgrade-234000/disk.qcow2
	I0805 10:36:55.541708    8999 main.go:141] libmachine: STDOUT: 
	I0805 10:36:55.541726    8999 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 10:36:55.541752    8999 fix.go:56] duration metric: took 13.513333ms for fixHost
	I0805 10:36:55.541755    8999 start.go:83] releasing machines lock for "kubernetes-upgrade-234000", held for 13.526208ms
	W0805 10:36:55.541762    8999 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 10:36:55.541785    8999 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 10:36:55.541789    8999 start.go:729] Will try again in 5 seconds ...
	I0805 10:37:00.541860    8999 start.go:360] acquireMachinesLock for kubernetes-upgrade-234000: {Name:mkb00f076c5be0afa0d8c4c2e732c0f60f89f86b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 10:37:00.542029    8999 start.go:364] duration metric: took 137.875µs to acquireMachinesLock for "kubernetes-upgrade-234000"
	I0805 10:37:00.542072    8999 start.go:96] Skipping create...Using existing machine configuration
	I0805 10:37:00.542077    8999 fix.go:54] fixHost starting: 
	I0805 10:37:00.542301    8999 fix.go:112] recreateIfNeeded on kubernetes-upgrade-234000: state=Stopped err=<nil>
	W0805 10:37:00.542309    8999 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 10:37:00.545352    8999 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-234000" ...
	I0805 10:37:00.553369    8999 qemu.go:418] Using hvf for hardware acceleration
	I0805 10:37:00.553442    8999 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/kubernetes-upgrade-234000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/kubernetes-upgrade-234000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/kubernetes-upgrade-234000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:54:97:d8:9c:2d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/kubernetes-upgrade-234000/disk.qcow2
	I0805 10:37:00.555938    8999 main.go:141] libmachine: STDOUT: 
	I0805 10:37:00.555958    8999 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 10:37:00.555979    8999 fix.go:56] duration metric: took 13.903208ms for fixHost
	I0805 10:37:00.555983    8999 start.go:83] releasing machines lock for "kubernetes-upgrade-234000", held for 13.948292ms
	W0805 10:37:00.556031    8999 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-234000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-234000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 10:37:00.563397    8999 out.go:177] 
	W0805 10:37:00.567340    8999 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 10:37:00.567357    8999 out.go:239] * 
	* 
	W0805 10:37:00.567807    8999 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 10:37:00.579359    8999 out.go:177] 

** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-234000 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-234000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-234000 version --output=json: exit status 1 (28.085875ms)

** stderr ** 
	error: context "kubernetes-upgrade-234000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
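The kubectl failure is a downstream symptom: neither start ever provisioned a host, so minikube never wrote a "kubernetes-upgrade-234000" context into the kubeconfig. A quick standalone check (assuming only that kubectl is on PATH) makes that explicit:

	// context_check.go -- illustrative sketch, not part of the suite.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// List context names only; the failed profile should be absent.
		out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
		if err != nil {
			fmt.Println("kubectl failed:", err)
			return
		}
		found := strings.Contains(string(out), "kubernetes-upgrade-234000")
		fmt.Println("context present:", found) // false after the failed starts above
	}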
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-08-05 10:37:00.615928 -0700 PDT m=+691.324986251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-234000 -n kubernetes-upgrade-234000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-234000 -n kubernetes-upgrade-234000: exit status 7 (30.370041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-234000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-234000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-234000
--- FAIL: TestKubernetesUpgrade (17.50s)
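Before rerunning this group of tests it is worth distinguishing "socket_vmnet never started" from "daemon died, leaving a stale socket behind". Since the log reports connection refused rather than no such file or directory, the socket path most likely exists with no listener behind it; stat-ing the path confirms which case applies. A sketch (the path comes from the log; the expectation that a live daemon leaves a unix socket there is an assumption about a typical socket_vmnet install):

	// socket_stat.go -- hedged diagnostic sketch.
	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		fi, err := os.Stat("/var/run/socket_vmnet")
		if err != nil {
			fmt.Println("socket path missing:", err) // daemon likely never started
			return
		}
		// A path that is present but not a unix socket, or a socket with
		// no listener, would also yield "connection refused" for clients.
		fmt.Printf("mode=%v, is unix socket=%v\n", fi.Mode(), fi.Mode()&os.ModeSocket != 0)
	}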

TestStoppedBinaryUpgrade/Upgrade (589.63s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2289204766 start -p stopped-upgrade-363000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2289204766 start -p stopped-upgrade-363000 --memory=2200 --vm-driver=qemu2 : (55.816813167s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2289204766 -p stopped-upgrade-363000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2289204766 -p stopped-upgrade-363000 stop: (12.093642125s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-363000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-363000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m41.622312667s)

-- stdout --
	* [stopped-upgrade-363000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19374
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-363000" primary control-plane node in "stopped-upgrade-363000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-363000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0805 10:38:00.554329    9068 out.go:291] Setting OutFile to fd 1 ...
	I0805 10:38:00.554442    9068 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:38:00.554445    9068 out.go:304] Setting ErrFile to fd 2...
	I0805 10:38:00.554448    9068 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:38:00.554582    9068 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
	I0805 10:38:00.555531    9068 out.go:298] Setting JSON to false
	I0805 10:38:00.572574    9068 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5850,"bootTime":1722873630,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 10:38:00.572643    9068 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 10:38:00.577308    9068 out.go:177] * [stopped-upgrade-363000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 10:38:00.585346    9068 out.go:177]   - MINIKUBE_LOCATION=19374
	I0805 10:38:00.585447    9068 notify.go:220] Checking for updates...
	I0805 10:38:00.592282    9068 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	I0805 10:38:00.595297    9068 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 10:38:00.598343    9068 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 10:38:00.599471    9068 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	I0805 10:38:00.602280    9068 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 10:38:00.605642    9068 config.go:182] Loaded profile config "stopped-upgrade-363000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0805 10:38:00.608296    9068 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0805 10:38:00.611407    9068 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 10:38:00.615311    9068 out.go:177] * Using the qemu2 driver based on existing profile
	I0805 10:38:00.622404    9068 start.go:297] selected driver: qemu2
	I0805 10:38:00.622412    9068 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-363000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51187 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-363000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0805 10:38:00.622486    9068 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 10:38:00.624874    9068 cni.go:84] Creating CNI manager for ""
	I0805 10:38:00.624892    9068 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 10:38:00.624922    9068 start.go:340] cluster config:
	{Name:stopped-upgrade-363000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51187 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-363000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0805 10:38:00.624978    9068 iso.go:125] acquiring lock: {Name:mk3c732f608e41abb95f9d7b90e7a96dff21b06c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 10:38:00.631250    9068 out.go:177] * Starting "stopped-upgrade-363000" primary control-plane node in "stopped-upgrade-363000" cluster
	I0805 10:38:00.635317    9068 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0805 10:38:00.635354    9068 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0805 10:38:00.635363    9068 cache.go:56] Caching tarball of preloaded images
	I0805 10:38:00.635442    9068 preload.go:172] Found /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 10:38:00.635449    9068 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0805 10:38:00.635501    9068 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/stopped-upgrade-363000/config.json ...
	I0805 10:38:00.635855    9068 start.go:360] acquireMachinesLock for stopped-upgrade-363000: {Name:mkb00f076c5be0afa0d8c4c2e732c0f60f89f86b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 10:38:00.635890    9068 start.go:364] duration metric: took 27.459µs to acquireMachinesLock for "stopped-upgrade-363000"
	I0805 10:38:00.635898    9068 start.go:96] Skipping create...Using existing machine configuration
	I0805 10:38:00.635904    9068 fix.go:54] fixHost starting: 
	I0805 10:38:00.636014    9068 fix.go:112] recreateIfNeeded on stopped-upgrade-363000: state=Stopped err=<nil>
	W0805 10:38:00.636022    9068 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 10:38:00.639274    9068 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-363000" ...
	I0805 10:38:00.647315    9068 qemu.go:418] Using hvf for hardware acceleration
	I0805 10:38:00.647390    9068 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/stopped-upgrade-363000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/stopped-upgrade-363000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/stopped-upgrade-363000/qemu.pid -nic user,model=virtio,hostfwd=tcp::51155-:22,hostfwd=tcp::51156-:2376,hostname=stopped-upgrade-363000 -daemonize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/stopped-upgrade-363000/disk.qcow2
	I0805 10:38:00.700886    9068 main.go:141] libmachine: STDOUT: 
	I0805 10:38:00.700913    9068 main.go:141] libmachine: STDERR: 
	I0805 10:38:00.700918    9068 main.go:141] libmachine: Waiting for VM to start (ssh -p 51155 docker@127.0.0.1)...
	I0805 10:38:20.311101    9068 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/stopped-upgrade-363000/config.json ...
	I0805 10:38:20.311448    9068 machine.go:94] provisionDockerMachine start ...
	I0805 10:38:20.311539    9068 main.go:141] libmachine: Using SSH client type: native
	I0805 10:38:20.311763    9068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10063aa10] 0x10063d270 <nil>  [] 0s} localhost 51155 <nil> <nil>}
	I0805 10:38:20.311769    9068 main.go:141] libmachine: About to run SSH command:
	hostname
	I0805 10:38:20.383542    9068 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0805 10:38:20.383560    9068 buildroot.go:166] provisioning hostname "stopped-upgrade-363000"
	I0805 10:38:20.383639    9068 main.go:141] libmachine: Using SSH client type: native
	I0805 10:38:20.383768    9068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10063aa10] 0x10063d270 <nil>  [] 0s} localhost 51155 <nil> <nil>}
	I0805 10:38:20.383773    9068 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-363000 && echo "stopped-upgrade-363000" | sudo tee /etc/hostname
	I0805 10:38:20.456069    9068 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-363000
	
	I0805 10:38:20.456121    9068 main.go:141] libmachine: Using SSH client type: native
	I0805 10:38:20.456240    9068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10063aa10] 0x10063d270 <nil>  [] 0s} localhost 51155 <nil> <nil>}
	I0805 10:38:20.456251    9068 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-363000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-363000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-363000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 10:38:20.530105    9068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 10:38:20.530118    9068 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19374-6507/.minikube CaCertPath:/Users/jenkins/minikube-integration/19374-6507/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19374-6507/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19374-6507/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19374-6507/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19374-6507/.minikube}
	I0805 10:38:20.530124    9068 buildroot.go:174] setting up certificates
	I0805 10:38:20.530128    9068 provision.go:84] configureAuth start
	I0805 10:38:20.530132    9068 provision.go:143] copyHostCerts
	I0805 10:38:20.530207    9068 exec_runner.go:144] found /Users/jenkins/minikube-integration/19374-6507/.minikube/cert.pem, removing ...
	I0805 10:38:20.530213    9068 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19374-6507/.minikube/cert.pem
	I0805 10:38:20.530314    9068 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19374-6507/.minikube/cert.pem (1123 bytes)
	I0805 10:38:20.530513    9068 exec_runner.go:144] found /Users/jenkins/minikube-integration/19374-6507/.minikube/key.pem, removing ...
	I0805 10:38:20.530516    9068 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19374-6507/.minikube/key.pem
	I0805 10:38:20.530560    9068 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19374-6507/.minikube/key.pem (1679 bytes)
	I0805 10:38:20.530656    9068 exec_runner.go:144] found /Users/jenkins/minikube-integration/19374-6507/.minikube/ca.pem, removing ...
	I0805 10:38:20.530659    9068 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19374-6507/.minikube/ca.pem
	I0805 10:38:20.530706    9068 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19374-6507/.minikube/ca.pem (1082 bytes)
	I0805 10:38:20.530807    9068 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19374-6507/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19374-6507/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-363000 san=[127.0.0.1 localhost minikube stopped-upgrade-363000]
	I0805 10:38:20.655365    9068 provision.go:177] copyRemoteCerts
	I0805 10:38:20.655403    9068 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 10:38:20.655415    9068 sshutil.go:53] new ssh client: &{IP:localhost Port:51155 SSHKeyPath:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/stopped-upgrade-363000/id_rsa Username:docker}
	I0805 10:38:20.694554    9068 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 10:38:20.701913    9068 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0805 10:38:20.709138    9068 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0805 10:38:20.716292    9068 provision.go:87] duration metric: took 186.161625ms to configureAuth
	I0805 10:38:20.716300    9068 buildroot.go:189] setting minikube options for container-runtime
	I0805 10:38:20.716419    9068 config.go:182] Loaded profile config "stopped-upgrade-363000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0805 10:38:20.716462    9068 main.go:141] libmachine: Using SSH client type: native
	I0805 10:38:20.716556    9068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10063aa10] 0x10063d270 <nil>  [] 0s} localhost 51155 <nil> <nil>}
	I0805 10:38:20.716561    9068 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0805 10:38:20.784131    9068 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0805 10:38:20.784141    9068 buildroot.go:70] root file system type: tmpfs
	I0805 10:38:20.784196    9068 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0805 10:38:20.784255    9068 main.go:141] libmachine: Using SSH client type: native
	I0805 10:38:20.784382    9068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10063aa10] 0x10063d270 <nil>  [] 0s} localhost 51155 <nil> <nil>}
	I0805 10:38:20.784416    9068 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0805 10:38:20.855853    9068 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0805 10:38:20.855908    9068 main.go:141] libmachine: Using SSH client type: native
	I0805 10:38:20.856026    9068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10063aa10] 0x10063d270 <nil>  [] 0s} localhost 51155 <nil> <nil>}
	I0805 10:38:20.856037    9068 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0805 10:38:21.211532    9068 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0805 10:38:21.211545    9068 machine.go:97] duration metric: took 900.101583ms to provisionDockerMachine
	I0805 10:38:21.211551    9068 start.go:293] postStartSetup for "stopped-upgrade-363000" (driver="qemu2")
	I0805 10:38:21.211558    9068 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 10:38:21.211616    9068 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 10:38:21.211627    9068 sshutil.go:53] new ssh client: &{IP:localhost Port:51155 SSHKeyPath:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/stopped-upgrade-363000/id_rsa Username:docker}
	I0805 10:38:21.249114    9068 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 10:38:21.250534    9068 info.go:137] Remote host: Buildroot 2021.02.12
	I0805 10:38:21.250542    9068 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19374-6507/.minikube/addons for local assets ...
	I0805 10:38:21.250609    9068 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19374-6507/.minikube/files for local assets ...
	I0805 10:38:21.250694    9068 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19374-6507/.minikube/files/etc/ssl/certs/70072.pem -> 70072.pem in /etc/ssl/certs
	I0805 10:38:21.250787    9068 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 10:38:21.253922    9068 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19374-6507/.minikube/files/etc/ssl/certs/70072.pem --> /etc/ssl/certs/70072.pem (1708 bytes)
	I0805 10:38:21.261063    9068 start.go:296] duration metric: took 49.507125ms for postStartSetup
	I0805 10:38:21.261077    9068 fix.go:56] duration metric: took 20.625449958s for fixHost
	I0805 10:38:21.261110    9068 main.go:141] libmachine: Using SSH client type: native
	I0805 10:38:21.261211    9068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10063aa10] 0x10063d270 <nil>  [] 0s} localhost 51155 <nil> <nil>}
	I0805 10:38:21.261216    9068 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0805 10:38:21.327551    9068 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722879501.820092296
	
	I0805 10:38:21.327557    9068 fix.go:216] guest clock: 1722879501.820092296
	I0805 10:38:21.327561    9068 fix.go:229] Guest: 2024-08-05 10:38:21.820092296 -0700 PDT Remote: 2024-08-05 10:38:21.261079 -0700 PDT m=+20.728287585 (delta=559.013296ms)
	I0805 10:38:21.327572    9068 fix.go:200] guest clock delta is within tolerance: 559.013296ms
	I0805 10:38:21.327575    9068 start.go:83] releasing machines lock for "stopped-upgrade-363000", held for 20.691955833s
	I0805 10:38:21.327643    9068 ssh_runner.go:195] Run: cat /version.json
	I0805 10:38:21.327650    9068 sshutil.go:53] new ssh client: &{IP:localhost Port:51155 SSHKeyPath:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/stopped-upgrade-363000/id_rsa Username:docker}
	I0805 10:38:21.327658    9068 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 10:38:21.327676    9068 sshutil.go:53] new ssh client: &{IP:localhost Port:51155 SSHKeyPath:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/stopped-upgrade-363000/id_rsa Username:docker}
	W0805 10:38:21.328227    9068 sshutil.go:64] dial failure (will retry): ssh: handshake failed: write tcp 127.0.0.1:51341->127.0.0.1:51155: write: broken pipe
	I0805 10:38:21.328242    9068 retry.go:31] will retry after 228.558254ms: ssh: handshake failed: write tcp 127.0.0.1:51341->127.0.0.1:51155: write: broken pipe
	W0805 10:38:21.363491    9068 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0805 10:38:21.363550    9068 ssh_runner.go:195] Run: systemctl --version
	I0805 10:38:21.365329    9068 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0805 10:38:21.367015    9068 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 10:38:21.367044    9068 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0805 10:38:21.370052    9068 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0805 10:38:21.374862    9068 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0805 10:38:21.374872    9068 start.go:495] detecting cgroup driver to use...
	I0805 10:38:21.374983    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 10:38:21.382646    9068 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0805 10:38:21.385921    9068 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0805 10:38:21.389129    9068 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0805 10:38:21.389149    9068 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0805 10:38:21.392479    9068 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 10:38:21.395674    9068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0805 10:38:21.398480    9068 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 10:38:21.401465    9068 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 10:38:21.404891    9068 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0805 10:38:21.407693    9068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0805 10:38:21.410458    9068 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0805 10:38:21.414104    9068 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 10:38:21.417527    9068 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 10:38:21.420678    9068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 10:38:21.502257    9068 ssh_runner.go:195] Run: sudo systemctl restart containerd
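Taken together, the sed edits above converge /etc/containerd/config.toml on a cgroupfs runc v2 runtime before the restart. A sketch of how to confirm the settings they target; exact file layout varies by containerd version:

    grep -E 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml
    # expected after the edits:
    #   sandbox_image = "registry.k8s.io/pause:3.7"
    #   SystemdCgroup = false
    #   conf_dir = "/etc/cni/net.d"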
	I0805 10:38:21.514420    9068 start.go:495] detecting cgroup driver to use...
	I0805 10:38:21.514493    9068 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0805 10:38:21.526253    9068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 10:38:21.531524    9068 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 10:38:21.539617    9068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 10:38:21.544324    9068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0805 10:38:21.548920    9068 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0805 10:38:21.605627    9068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0805 10:38:21.645250    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 10:38:21.651213    9068 ssh_runner.go:195] Run: which cri-dockerd
	I0805 10:38:21.652435    9068 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0805 10:38:21.654966    9068 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0805 10:38:21.660207    9068 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0805 10:38:21.744530    9068 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0805 10:38:21.821485    9068 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0805 10:38:21.821539    9068 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0805 10:38:21.827165    9068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 10:38:21.894989    9068 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0805 10:38:23.007062    9068 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.112061542s)
	I0805 10:38:23.007167    9068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0805 10:38:23.016648    9068 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0805 10:38:23.022485    9068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0805 10:38:23.028005    9068 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0805 10:38:23.097782    9068 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0805 10:38:23.164188    9068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 10:38:23.249192    9068 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0805 10:38:23.256234    9068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0805 10:38:23.260859    9068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 10:38:23.320630    9068 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0805 10:38:23.362238    9068 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0805 10:38:23.362318    9068 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0805 10:38:23.364583    9068 start.go:563] Will wait 60s for crictl version
	I0805 10:38:23.364695    9068 ssh_runner.go:195] Run: which crictl
	I0805 10:38:23.366119    9068 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0805 10:38:23.381551    9068 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0805 10:38:23.381629    9068 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0805 10:38:23.399801    9068 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0805 10:38:23.417691    9068 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0805 10:38:23.417761    9068 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0805 10:38:23.419375    9068 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
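The hosts edit above is the strip-then-append idiom, so re-running it is idempotent: drop any stale host.minikube.internal line, append the fresh mapping, then copy the temp file back over /etc/hosts. The same command, spelled out:

    { grep -v $'\thost.minikube.internal$' /etc/hosts     # keep everything but the old entry
      echo $'10.0.2.2\thost.minikube.internal'            # append the current mapping
    } > /tmp/h.$$ && sudo cp /tmp/h.$$ /etc/hosts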
	I0805 10:38:23.423112    9068 kubeadm.go:883] updating cluster {Name:stopped-upgrade-363000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51187 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-363000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0805 10:38:23.423166    9068 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0805 10:38:23.423211    9068 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0805 10:38:23.433743    9068 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0805 10:38:23.433759    9068 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0805 10:38:23.433804    9068 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0805 10:38:23.437110    9068 ssh_runner.go:195] Run: which lz4
	I0805 10:38:23.438325    9068 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0805 10:38:23.439699    9068 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0805 10:38:23.439710    9068 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0805 10:38:24.385779    9068 docker.go:649] duration metric: took 947.494584ms to copy over tarball
	I0805 10:38:24.385837    9068 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0805 10:38:25.545531    9068 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.159693958s)
	I0805 10:38:25.545545    9068 ssh_runner.go:146] rm: /preloaded.tar.lz4
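The preload path above is: stat the tarball on the guest, transfer the ~360 MB preload only when that stat fails, untar it into /var with security xattrs preserved, then delete the tarball. A condensed skeleton, with `guest` standing in for the SSH target from the log:

    T=/preloaded.tar.lz4
    ssh guest "stat -c '%s %y' $T" \
      || scp ~/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 guest:$T
    ssh guest "sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf $T && sudo rm $T"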
	I0805 10:38:25.561664    9068 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0805 10:38:25.564722    9068 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0805 10:38:25.569358    9068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 10:38:25.650494    9068 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0805 10:38:27.301081    9068 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.650593375s)
	I0805 10:38:27.301179    9068 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0805 10:38:27.313820    9068 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0805 10:38:27.313845    9068 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0805 10:38:27.313853    9068 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0805 10:38:27.318763    9068 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 10:38:27.321015    9068 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0805 10:38:27.323406    9068 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0805 10:38:27.323563    9068 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 10:38:27.325941    9068 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0805 10:38:27.326151    9068 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0805 10:38:27.327665    9068 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0805 10:38:27.327753    9068 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0805 10:38:27.329088    9068 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0805 10:38:27.329315    9068 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0805 10:38:27.329906    9068 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0805 10:38:27.330595    9068 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0805 10:38:27.331804    9068 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0805 10:38:27.331890    9068 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0805 10:38:27.332220    9068 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0805 10:38:27.333209    9068 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0805 10:38:27.765065    9068 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0805 10:38:27.771435    9068 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0805 10:38:27.779233    9068 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0805 10:38:27.789964    9068 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0805 10:38:27.804316    9068 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0805 10:38:27.804338    9068 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0805 10:38:27.804342    9068 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0805 10:38:27.804349    9068 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0805 10:38:27.804400    9068 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0805 10:38:27.804400    9068 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0805 10:38:27.807484    9068 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0805 10:38:27.807501    9068 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0805 10:38:27.807538    9068 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0805 10:38:27.813638    9068 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0805 10:38:27.813661    9068 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0805 10:38:27.813740    9068 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0805 10:38:27.816656    9068 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0805 10:38:27.822929    9068 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0805 10:38:27.823057    9068 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0805 10:38:27.826612    9068 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	W0805 10:38:27.830073    9068 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0805 10:38:27.830192    9068 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0805 10:38:27.838046    9068 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0805 10:38:27.839332    9068 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0805 10:38:27.839801    9068 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0805 10:38:27.839818    9068 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0805 10:38:27.839840    9068 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0805 10:38:27.839881    9068 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0805 10:38:27.839867    9068 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0805 10:38:27.849090    9068 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0805 10:38:27.849113    9068 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0805 10:38:27.849165    9068 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0805 10:38:27.854799    9068 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0805 10:38:27.858677    9068 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0805 10:38:27.858698    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0805 10:38:27.860346    9068 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0805 10:38:27.860459    9068 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0805 10:38:27.882692    9068 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0805 10:38:27.890061    9068 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0805 10:38:27.890088    9068 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0805 10:38:27.890112    9068 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0805 10:38:27.897346    9068 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0805 10:38:27.897367    9068 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0805 10:38:27.897420    9068 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0805 10:38:27.910287    9068 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0805 10:38:27.910423    9068 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0805 10:38:27.915309    9068 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0805 10:38:27.915342    9068 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0805 10:38:27.946071    9068 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0805 10:38:27.946085    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0805 10:38:28.026363    9068 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0805 10:38:28.177559    9068 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0805 10:38:28.177573    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	W0805 10:38:28.302613    9068 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0805 10:38:28.302717    9068 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 10:38:28.330295    9068 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0805 10:38:28.330572    9068 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0805 10:38:28.330595    9068 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 10:38:28.330644    9068 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 10:38:28.344934    9068 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0805 10:38:28.345057    9068 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0805 10:38:28.346357    9068 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0805 10:38:28.346373    9068 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0805 10:38:28.375802    9068 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0805 10:38:28.375816    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0805 10:38:28.608809    9068 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0805 10:38:28.608847    9068 cache_images.go:92] duration metric: took 1.295004625s to LoadCachedImages
	W0805 10:38:28.608885    9068 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
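Every image marked "needs transfer" above went through the same four steps: inspect the hash in the guest's runtime, remove the mismatched copy, scp the cached tarball over, and pipe it into docker load. One image's worth as a sketch run on the guest, using pause:3.7 and the paths from the log:

    docker image inspect --format '{{.Id}}' registry.k8s.io/pause:3.7   # wrong hash -> needs transfer
    docker rmi registry.k8s.io/pause:3.7                                # drop the stale copy
    # (from the host) scp ~/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 guest:/var/lib/minikube/images/pause_3.7
    sudo cat /var/lib/minikube/images/pause_3.7 | docker load           # load into the runtime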
	I0805 10:38:28.608891    9068 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0805 10:38:28.608964    9068 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-363000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-363000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
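The empty ExecStart= in the drop-in above is the standard systemd idiom for replacing, rather than appending to, the packaged ExecStart; only the second line survives the merge. A sketch of how to verify which command line the kubelet will actually run:

    sudo systemctl cat kubelet       # shows the unit with all drop-ins applied
    sudo systemctl daemon-reload     # pick up the new drop-in (the log does this below)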
	I0805 10:38:28.609035    9068 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0805 10:38:28.622219    9068 cni.go:84] Creating CNI manager for ""
	I0805 10:38:28.622233    9068 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 10:38:28.622237    9068 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0805 10:38:28.622246    9068 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-363000 NodeName:stopped-upgrade-363000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0805 10:38:28.622312    9068 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-363000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0805 10:38:28.622368    9068 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0805 10:38:28.625710    9068 binaries.go:44] Found k8s binaries, skipping transfer
	I0805 10:38:28.625735    9068 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0805 10:38:28.628733    9068 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0805 10:38:28.633786    9068 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0805 10:38:28.638775    9068 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
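That 2096-byte payload is the three-document kubeadm config rendered above, staged as kubeadm.yaml.new before being diffed against, and eventually promoted over, the live kubeadm.yaml. One way to vet a rendered config like this before promotion is a dry run against the pinned binary; a sketch, not a step minikube itself performs here:

    sudo /var/lib/minikube/binaries/v1.24.1/kubeadm init --dry-run \
      --config /var/tmp/minikube/kubeadm.yaml.new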
	I0805 10:38:28.644042    9068 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0805 10:38:28.645262    9068 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 10:38:28.648837    9068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 10:38:28.737338    9068 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 10:38:28.743526    9068 certs.go:68] Setting up /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/stopped-upgrade-363000 for IP: 10.0.2.15
	I0805 10:38:28.743533    9068 certs.go:194] generating shared ca certs ...
	I0805 10:38:28.743542    9068 certs.go:226] acquiring lock for ca certs: {Name:mkd94903be2cadc29e0a5fb0c61367bd1b12d51b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 10:38:28.743816    9068 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19374-6507/.minikube/ca.key
	I0805 10:38:28.743874    9068 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19374-6507/.minikube/proxy-client-ca.key
	I0805 10:38:28.743879    9068 certs.go:256] generating profile certs ...
	I0805 10:38:28.743961    9068 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/stopped-upgrade-363000/client.key
	I0805 10:38:28.743975    9068 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/stopped-upgrade-363000/apiserver.key.98c64959
	I0805 10:38:28.743990    9068 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/stopped-upgrade-363000/apiserver.crt.98c64959 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0805 10:38:28.804850    9068 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/stopped-upgrade-363000/apiserver.crt.98c64959 ...
	I0805 10:38:28.804864    9068 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/stopped-upgrade-363000/apiserver.crt.98c64959: {Name:mkaa3b075e5add0a05595241adf2a23d191578fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 10:38:28.805187    9068 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/stopped-upgrade-363000/apiserver.key.98c64959 ...
	I0805 10:38:28.805192    9068 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/stopped-upgrade-363000/apiserver.key.98c64959: {Name:mkccc30ab8922f1da13a0605c91820e5e1a3b3cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 10:38:28.805328    9068 certs.go:381] copying /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/stopped-upgrade-363000/apiserver.crt.98c64959 -> /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/stopped-upgrade-363000/apiserver.crt
	I0805 10:38:28.805467    9068 certs.go:385] copying /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/stopped-upgrade-363000/apiserver.key.98c64959 -> /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/stopped-upgrade-363000/apiserver.key
	I0805 10:38:28.805624    9068 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/stopped-upgrade-363000/proxy-client.key
	I0805 10:38:28.805764    9068 certs.go:484] found cert: /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/7007.pem (1338 bytes)
	W0805 10:38:28.805794    9068 certs.go:480] ignoring /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/7007_empty.pem, impossibly tiny 0 bytes
	I0805 10:38:28.805800    9068 certs.go:484] found cert: /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/ca-key.pem (1675 bytes)
	I0805 10:38:28.805829    9068 certs.go:484] found cert: /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/ca.pem (1082 bytes)
	I0805 10:38:28.805857    9068 certs.go:484] found cert: /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/cert.pem (1123 bytes)
	I0805 10:38:28.805884    9068 certs.go:484] found cert: /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/key.pem (1679 bytes)
	I0805 10:38:28.805939    9068 certs.go:484] found cert: /Users/jenkins/minikube-integration/19374-6507/.minikube/files/etc/ssl/certs/70072.pem (1708 bytes)
	I0805 10:38:28.806332    9068 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19374-6507/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0805 10:38:28.813161    9068 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19374-6507/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0805 10:38:28.819672    9068 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19374-6507/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0805 10:38:28.826415    9068 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19374-6507/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0805 10:38:28.833459    9068 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/stopped-upgrade-363000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0805 10:38:28.840670    9068 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/stopped-upgrade-363000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0805 10:38:28.847325    9068 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/stopped-upgrade-363000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0805 10:38:28.853931    9068 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/stopped-upgrade-363000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0805 10:38:28.861282    9068 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19374-6507/.minikube/files/etc/ssl/certs/70072.pem --> /usr/share/ca-certificates/70072.pem (1708 bytes)
	I0805 10:38:28.868294    9068 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19374-6507/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0805 10:38:28.874697    9068 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/7007.pem --> /usr/share/ca-certificates/7007.pem (1338 bytes)
	I0805 10:38:28.881612    9068 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0805 10:38:28.886628    9068 ssh_runner.go:195] Run: openssl version
	I0805 10:38:28.888367    9068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/70072.pem && ln -fs /usr/share/ca-certificates/70072.pem /etc/ssl/certs/70072.pem"
	I0805 10:38:28.891328    9068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/70072.pem
	I0805 10:38:28.892607    9068 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  5 17:26 /usr/share/ca-certificates/70072.pem
	I0805 10:38:28.892626    9068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/70072.pem
	I0805 10:38:28.894282    9068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/70072.pem /etc/ssl/certs/3ec20f2e.0"
	I0805 10:38:28.897629    9068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0805 10:38:28.900771    9068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0805 10:38:28.902098    9068 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 17:37 /usr/share/ca-certificates/minikubeCA.pem
	I0805 10:38:28.902119    9068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0805 10:38:28.903863    9068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0805 10:38:28.906715    9068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7007.pem && ln -fs /usr/share/ca-certificates/7007.pem /etc/ssl/certs/7007.pem"
	I0805 10:38:28.909850    9068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7007.pem
	I0805 10:38:28.911232    9068 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  5 17:26 /usr/share/ca-certificates/7007.pem
	I0805 10:38:28.911249    9068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7007.pem
	I0805 10:38:28.912933    9068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7007.pem /etc/ssl/certs/51391683.0"
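The 3ec20f2e.0 / b5213941.0 / 51391683.0 names above are OpenSSL subject hashes: hash the certificate, then symlink <hash>.0 to it so the default verify path can resolve issuers by hash. The pattern, sketched for the minikube CA:

    CERT=/etc/ssl/certs/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")    # e.g. b5213941
    sudo ln -fs "$CERT" "/etc/ssl/certs/$HASH.0"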
	I0805 10:38:28.915931    9068 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 10:38:28.917241    9068 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0805 10:38:28.919341    9068 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0805 10:38:28.921074    9068 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0805 10:38:28.923380    9068 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0805 10:38:28.925035    9068 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0805 10:38:28.926685    9068 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
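-checkend 86400 asks whether each certificate will still be valid 86400 seconds (24 hours) from now: exit status 0 means it survives the window, non-zero means it expires inside it and would need regenerating. Spelled out for one of the certs above:

    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
      echo "still valid in 24h"
    else
      echo "expires within 24h: regenerate"
    fi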
	I0805 10:38:28.928506    9068 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-363000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51187 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-363000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0805 10:38:28.928586    9068 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0805 10:38:28.938944    9068 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0805 10:38:28.941948    9068 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0805 10:38:28.941959    9068 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0805 10:38:28.941984    9068 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0805 10:38:28.944709    9068 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0805 10:38:28.944745    9068 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-363000" does not appear in /Users/jenkins/minikube-integration/19374-6507/kubeconfig
	I0805 10:38:28.944759    9068 kubeconfig.go:62] /Users/jenkins/minikube-integration/19374-6507/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-363000" cluster setting kubeconfig missing "stopped-upgrade-363000" context setting]
	I0805 10:38:28.944931    9068 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19374-6507/kubeconfig: {Name:mkf52f0a49b2ae63f3d2905c5633513b3086a0af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 10:38:28.945614    9068 kapi.go:59] client config for stopped-upgrade-363000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/stopped-upgrade-363000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/stopped-upgrade-363000/client.key", CAFile:"/Users/jenkins/minikube-integration/19374-6507/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1019d02e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0805 10:38:28.946438    9068 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0805 10:38:28.949060    9068 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-363000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0805 10:38:28.949065    9068 kubeadm.go:1160] stopping kube-system containers ...
	I0805 10:38:28.949102    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0805 10:38:28.959580    9068 docker.go:483] Stopping containers: [91f7a199884a 6b94189c4353 e1cc9e5e2f59 3c41f12d029f 636206c34e2e a93dea7a5880 d01ea66fa9b2 0083d25943ab]
	I0805 10:38:28.959641    9068 ssh_runner.go:195] Run: docker stop 91f7a199884a 6b94189c4353 e1cc9e5e2f59 3c41f12d029f 636206c34e2e a93dea7a5880 d01ea66fa9b2 0083d25943ab
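The IDs above come from the name filter k8s_.*_(kube-system)_ — the dockershim/cri-dockerd naming convention embeds the pod's namespace in the container name — so the whole control plane can be stopped in one batch. A sketch:

    ids=$(docker ps -a --filter='name=k8s_.*_(kube-system)_' --format '{{.ID}}')
    docker stop $ids   # unquoted on purpose: one argument per container ID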
	I0805 10:38:28.970025    9068 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0805 10:38:28.975565    9068 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 10:38:28.978278    9068 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 10:38:28.978284    9068 kubeadm.go:157] found existing configuration files:
	
	I0805 10:38:28.978306    9068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51187 /etc/kubernetes/admin.conf
	I0805 10:38:28.981156    9068 kubeadm.go:163] "https://control-plane.minikube.internal:51187" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51187 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 10:38:28.981177    9068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 10:38:28.983711    9068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51187 /etc/kubernetes/kubelet.conf
	I0805 10:38:28.986037    9068 kubeadm.go:163] "https://control-plane.minikube.internal:51187" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51187 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 10:38:28.986057    9068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 10:38:28.988914    9068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51187 /etc/kubernetes/controller-manager.conf
	I0805 10:38:28.991528    9068 kubeadm.go:163] "https://control-plane.minikube.internal:51187" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51187 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 10:38:28.991547    9068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 10:38:28.994050    9068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51187 /etc/kubernetes/scheduler.conf
	I0805 10:38:28.996874    9068 kubeadm.go:163] "https://control-plane.minikube.internal:51187" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51187 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 10:38:28.996895    9068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0805 10:38:28.999298    9068 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 10:38:29.002116    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 10:38:29.025344    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 10:38:29.436613    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0805 10:38:29.573780    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 10:38:29.604987    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
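Rather than a full kubeadm init, the restart path replays the individual phases above against the same config: certs, kubeconfig, kubelet-start, control-plane, etcd. The sequence, minus the PATH plumbing to the pinned binary, as a sketch:

    CFG=/var/tmp/minikube/kubeadm.yaml
    for phase in 'certs all' 'kubeconfig all' 'kubelet-start' 'control-plane all' 'etcd local'; do
      sudo kubeadm init phase $phase --config "$CFG"   # $phase unquoted so "certs all" splits into two args
    done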
	I0805 10:38:29.640014    9068 api_server.go:52] waiting for apiserver process to appear ...
	I0805 10:38:29.640093    9068 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 10:38:30.142187    9068 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 10:38:30.642216    9068 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 10:38:30.646459    9068 api_server.go:72] duration metric: took 1.006459167s to wait for apiserver process to appear ...
	I0805 10:38:30.646467    9068 api_server.go:88] waiting for apiserver healthz status ...
	I0805 10:38:30.646477    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:38:35.648516    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:38:35.648533    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:38:40.648658    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:38:40.648686    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:38:45.649130    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:38:45.649150    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:38:50.649480    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:38:50.649510    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:38:55.649994    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:38:55.650056    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:39:00.650862    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:39:00.650912    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:39:05.652014    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:39:05.652083    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:39:10.653609    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:39:10.653657    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:39:15.655355    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:39:15.655392    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:39:20.657569    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:39:20.657608    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:39:25.659580    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:39:25.659601    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:39:30.661756    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:39:30.661925    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:39:30.681825    9068 logs.go:276] 2 containers: [399a26756750 3c41f12d029f]
	I0805 10:39:30.681931    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:39:30.696788    9068 logs.go:276] 2 containers: [0d7ebfcf0f52 636206c34e2e]
	I0805 10:39:30.696874    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:39:30.709359    9068 logs.go:276] 1 containers: [3fc287268b86]
	I0805 10:39:30.709431    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:39:30.720025    9068 logs.go:276] 2 containers: [ea53529dcbfd 91f7a199884a]
	I0805 10:39:30.720098    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:39:30.730363    9068 logs.go:276] 1 containers: [a29b2ad2a3db]
	I0805 10:39:30.730445    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:39:30.745329    9068 logs.go:276] 2 containers: [a2439666ad31 e1cc9e5e2f59]
	I0805 10:39:30.745396    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:39:30.756846    9068 logs.go:276] 0 containers: []
	W0805 10:39:30.756857    9068 logs.go:278] No container was found matching "kindnet"
	I0805 10:39:30.756917    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:39:30.767247    9068 logs.go:276] 2 containers: [e66ab34a3554 78e756f0378c]
	I0805 10:39:30.767264    9068 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:39:30.767269    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:39:30.873738    9068 logs.go:123] Gathering logs for storage-provisioner [78e756f0378c] ...
	I0805 10:39:30.873749    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78e756f0378c"
	I0805 10:39:30.884931    9068 logs.go:123] Gathering logs for Docker ...
	I0805 10:39:30.884941    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:39:30.909323    9068 logs.go:123] Gathering logs for kube-controller-manager [a2439666ad31] ...
	I0805 10:39:30.909335    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2439666ad31"
	I0805 10:39:30.926533    9068 logs.go:123] Gathering logs for kube-controller-manager [e1cc9e5e2f59] ...
	I0805 10:39:30.926546    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1cc9e5e2f59"
	I0805 10:39:30.941507    9068 logs.go:123] Gathering logs for container status ...
	I0805 10:39:30.941517    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:39:30.954116    9068 logs.go:123] Gathering logs for kube-apiserver [399a26756750] ...
	I0805 10:39:30.954130    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 399a26756750"
	I0805 10:39:30.968363    9068 logs.go:123] Gathering logs for etcd [0d7ebfcf0f52] ...
	I0805 10:39:30.968378    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d7ebfcf0f52"
	I0805 10:39:30.981650    9068 logs.go:123] Gathering logs for kube-scheduler [ea53529dcbfd] ...
	I0805 10:39:30.981665    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea53529dcbfd"
	I0805 10:39:30.993651    9068 logs.go:123] Gathering logs for coredns [3fc287268b86] ...
	I0805 10:39:30.993667    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fc287268b86"
	I0805 10:39:31.005359    9068 logs.go:123] Gathering logs for kube-proxy [a29b2ad2a3db] ...
	I0805 10:39:31.005372    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a29b2ad2a3db"
	I0805 10:39:31.016987    9068 logs.go:123] Gathering logs for storage-provisioner [e66ab34a3554] ...
	I0805 10:39:31.017001    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e66ab34a3554"
	I0805 10:39:31.028500    9068 logs.go:123] Gathering logs for kubelet ...
	I0805 10:39:31.028514    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:39:31.066912    9068 logs.go:123] Gathering logs for dmesg ...
	I0805 10:39:31.066920    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:39:31.071142    9068 logs.go:123] Gathering logs for kube-apiserver [3c41f12d029f] ...
	I0805 10:39:31.071148    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c41f12d029f"
	I0805 10:39:31.113145    9068 logs.go:123] Gathering logs for etcd [636206c34e2e] ...
	I0805 10:39:31.113157    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 636206c34e2e"
	I0805 10:39:31.132599    9068 logs.go:123] Gathering logs for kube-scheduler [91f7a199884a] ...
	I0805 10:39:31.132611    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91f7a199884a"
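
The cycle above repeats for the remainder of this log: api_server.go probes https://10.0.2.15:8443/healthz, each probe times out after about five seconds, and logs.go then enumerates the control-plane containers and tails their logs before probing again. A minimal bash sketch of that probe-and-give-up pattern (an illustration under the timeouts seen in the log, not minikube's actual implementation):

```bash
# Probe /healthz with a 5s cap per attempt, mirroring the Client.Timeout
# errors above; give up after an overall deadline. Depending on the
# cluster's anonymous-auth settings, curl may also need the client
# credentials that minikube's kubeconfig references (an assumption here).
deadline=$((SECONDS + 120))
until curl -ksf --max-time 5 https://10.0.2.15:8443/healthz >/dev/null; do
  if (( SECONDS >= deadline )); then
    echo "apiserver never became healthy" >&2
    exit 1
  fi
  # minikube gathers diagnostics at this point (docker ps, docker logs,
  # journalctl, dmesg, kubectl describe nodes) before the next probe
done
echo "apiserver healthy"
```

Because the five-second per-probe timeout is what keeps failing, the retries naturally land about five seconds apart, which matches the timestamps above.
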
	I0805 10:39:33.647166    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:39:38.647915    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:39:38.648236    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:39:38.681080    9068 logs.go:276] 2 containers: [399a26756750 3c41f12d029f]
	I0805 10:39:38.681202    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:39:38.700592    9068 logs.go:276] 2 containers: [0d7ebfcf0f52 636206c34e2e]
	I0805 10:39:38.700695    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:39:38.714786    9068 logs.go:276] 1 containers: [3fc287268b86]
	I0805 10:39:38.714858    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:39:38.726938    9068 logs.go:276] 2 containers: [ea53529dcbfd 91f7a199884a]
	I0805 10:39:38.727017    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:39:38.737871    9068 logs.go:276] 1 containers: [a29b2ad2a3db]
	I0805 10:39:38.737936    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:39:38.748246    9068 logs.go:276] 2 containers: [a2439666ad31 e1cc9e5e2f59]
	I0805 10:39:38.748312    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:39:38.758662    9068 logs.go:276] 0 containers: []
	W0805 10:39:38.758675    9068 logs.go:278] No container was found matching "kindnet"
	I0805 10:39:38.758726    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:39:38.769092    9068 logs.go:276] 2 containers: [e66ab34a3554 78e756f0378c]
	I0805 10:39:38.769110    9068 logs.go:123] Gathering logs for kube-scheduler [ea53529dcbfd] ...
	I0805 10:39:38.769116    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea53529dcbfd"
	I0805 10:39:38.780862    9068 logs.go:123] Gathering logs for kube-controller-manager [a2439666ad31] ...
	I0805 10:39:38.780872    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2439666ad31"
	I0805 10:39:38.798158    9068 logs.go:123] Gathering logs for storage-provisioner [e66ab34a3554] ...
	I0805 10:39:38.798169    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e66ab34a3554"
	I0805 10:39:38.809973    9068 logs.go:123] Gathering logs for coredns [3fc287268b86] ...
	I0805 10:39:38.809985    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fc287268b86"
	I0805 10:39:38.821623    9068 logs.go:123] Gathering logs for kube-apiserver [399a26756750] ...
	I0805 10:39:38.821638    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 399a26756750"
	I0805 10:39:38.836141    9068 logs.go:123] Gathering logs for kube-scheduler [91f7a199884a] ...
	I0805 10:39:38.836152    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91f7a199884a"
	I0805 10:39:38.849931    9068 logs.go:123] Gathering logs for kube-proxy [a29b2ad2a3db] ...
	I0805 10:39:38.849945    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a29b2ad2a3db"
	I0805 10:39:38.861428    9068 logs.go:123] Gathering logs for dmesg ...
	I0805 10:39:38.861441    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:39:38.865822    9068 logs.go:123] Gathering logs for kube-apiserver [3c41f12d029f] ...
	I0805 10:39:38.865831    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c41f12d029f"
	I0805 10:39:38.902527    9068 logs.go:123] Gathering logs for etcd [0d7ebfcf0f52] ...
	I0805 10:39:38.902539    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d7ebfcf0f52"
	I0805 10:39:38.924563    9068 logs.go:123] Gathering logs for etcd [636206c34e2e] ...
	I0805 10:39:38.924576    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 636206c34e2e"
	I0805 10:39:38.939044    9068 logs.go:123] Gathering logs for Docker ...
	I0805 10:39:38.939058    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:39:38.964878    9068 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:39:38.964887    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:39:39.003525    9068 logs.go:123] Gathering logs for kube-controller-manager [e1cc9e5e2f59] ...
	I0805 10:39:39.003540    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1cc9e5e2f59"
	I0805 10:39:39.018760    9068 logs.go:123] Gathering logs for storage-provisioner [78e756f0378c] ...
	I0805 10:39:39.018772    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78e756f0378c"
	I0805 10:39:39.030689    9068 logs.go:123] Gathering logs for container status ...
	I0805 10:39:39.030704    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:39:39.042847    9068 logs.go:123] Gathering logs for kubelet ...
	I0805 10:39:39.042859    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:39:41.579427    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:39:46.581786    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:39:46.582042    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:39:46.608870    9068 logs.go:276] 2 containers: [399a26756750 3c41f12d029f]
	I0805 10:39:46.608999    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:39:46.625694    9068 logs.go:276] 2 containers: [0d7ebfcf0f52 636206c34e2e]
	I0805 10:39:46.625784    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:39:46.643063    9068 logs.go:276] 1 containers: [3fc287268b86]
	I0805 10:39:46.643142    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:39:46.654743    9068 logs.go:276] 2 containers: [ea53529dcbfd 91f7a199884a]
	I0805 10:39:46.654811    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:39:46.665925    9068 logs.go:276] 1 containers: [a29b2ad2a3db]
	I0805 10:39:46.665994    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:39:46.679628    9068 logs.go:276] 2 containers: [a2439666ad31 e1cc9e5e2f59]
	I0805 10:39:46.679698    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:39:46.690505    9068 logs.go:276] 0 containers: []
	W0805 10:39:46.690517    9068 logs.go:278] No container was found matching "kindnet"
	I0805 10:39:46.690572    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:39:46.700944    9068 logs.go:276] 2 containers: [e66ab34a3554 78e756f0378c]
	I0805 10:39:46.700963    9068 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:39:46.700970    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:39:46.742158    9068 logs.go:123] Gathering logs for kube-apiserver [399a26756750] ...
	I0805 10:39:46.742169    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 399a26756750"
	I0805 10:39:46.756928    9068 logs.go:123] Gathering logs for storage-provisioner [78e756f0378c] ...
	I0805 10:39:46.756940    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78e756f0378c"
	I0805 10:39:46.767948    9068 logs.go:123] Gathering logs for container status ...
	I0805 10:39:46.767960    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:39:46.780970    9068 logs.go:123] Gathering logs for Docker ...
	I0805 10:39:46.780984    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:39:46.804852    9068 logs.go:123] Gathering logs for kubelet ...
	I0805 10:39:46.804860    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:39:46.842792    9068 logs.go:123] Gathering logs for dmesg ...
	I0805 10:39:46.842800    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:39:46.847418    9068 logs.go:123] Gathering logs for kube-scheduler [91f7a199884a] ...
	I0805 10:39:46.847428    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91f7a199884a"
	I0805 10:39:46.859399    9068 logs.go:123] Gathering logs for kube-proxy [a29b2ad2a3db] ...
	I0805 10:39:46.859412    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a29b2ad2a3db"
	I0805 10:39:46.873136    9068 logs.go:123] Gathering logs for kube-controller-manager [e1cc9e5e2f59] ...
	I0805 10:39:46.873145    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1cc9e5e2f59"
	I0805 10:39:46.886742    9068 logs.go:123] Gathering logs for kube-apiserver [3c41f12d029f] ...
	I0805 10:39:46.886753    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c41f12d029f"
	I0805 10:39:46.926986    9068 logs.go:123] Gathering logs for etcd [0d7ebfcf0f52] ...
	I0805 10:39:46.926997    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d7ebfcf0f52"
	I0805 10:39:46.941111    9068 logs.go:123] Gathering logs for coredns [3fc287268b86] ...
	I0805 10:39:46.941122    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fc287268b86"
	I0805 10:39:46.952093    9068 logs.go:123] Gathering logs for kube-controller-manager [a2439666ad31] ...
	I0805 10:39:46.952103    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2439666ad31"
	I0805 10:39:46.969768    9068 logs.go:123] Gathering logs for etcd [636206c34e2e] ...
	I0805 10:39:46.969782    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 636206c34e2e"
	I0805 10:39:46.984270    9068 logs.go:123] Gathering logs for kube-scheduler [ea53529dcbfd] ...
	I0805 10:39:46.984281    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea53529dcbfd"
	I0805 10:39:46.995637    9068 logs.go:123] Gathering logs for storage-provisioner [e66ab34a3554] ...
	I0805 10:39:46.995646    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e66ab34a3554"
	I0805 10:39:49.509256    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:39:54.511611    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:39:54.511756    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:39:54.522835    9068 logs.go:276] 2 containers: [399a26756750 3c41f12d029f]
	I0805 10:39:54.522905    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:39:54.533595    9068 logs.go:276] 2 containers: [0d7ebfcf0f52 636206c34e2e]
	I0805 10:39:54.533663    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:39:54.544165    9068 logs.go:276] 1 containers: [3fc287268b86]
	I0805 10:39:54.544237    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:39:54.554823    9068 logs.go:276] 2 containers: [ea53529dcbfd 91f7a199884a]
	I0805 10:39:54.554899    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:39:54.564974    9068 logs.go:276] 1 containers: [a29b2ad2a3db]
	I0805 10:39:54.565053    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:39:54.576694    9068 logs.go:276] 2 containers: [a2439666ad31 e1cc9e5e2f59]
	I0805 10:39:54.576770    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:39:54.586668    9068 logs.go:276] 0 containers: []
	W0805 10:39:54.586683    9068 logs.go:278] No container was found matching "kindnet"
	I0805 10:39:54.586742    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:39:54.597007    9068 logs.go:276] 2 containers: [e66ab34a3554 78e756f0378c]
	I0805 10:39:54.597025    9068 logs.go:123] Gathering logs for dmesg ...
	I0805 10:39:54.597031    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:39:54.601219    9068 logs.go:123] Gathering logs for kube-apiserver [399a26756750] ...
	I0805 10:39:54.601225    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 399a26756750"
	I0805 10:39:54.615271    9068 logs.go:123] Gathering logs for kube-scheduler [91f7a199884a] ...
	I0805 10:39:54.615282    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91f7a199884a"
	I0805 10:39:54.627097    9068 logs.go:123] Gathering logs for kube-proxy [a29b2ad2a3db] ...
	I0805 10:39:54.627107    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a29b2ad2a3db"
	I0805 10:39:54.638953    9068 logs.go:123] Gathering logs for container status ...
	I0805 10:39:54.638965    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:39:54.650961    9068 logs.go:123] Gathering logs for kubelet ...
	I0805 10:39:54.650972    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:39:54.689978    9068 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:39:54.689987    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:39:54.726888    9068 logs.go:123] Gathering logs for coredns [3fc287268b86] ...
	I0805 10:39:54.726899    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fc287268b86"
	I0805 10:39:54.738431    9068 logs.go:123] Gathering logs for storage-provisioner [e66ab34a3554] ...
	I0805 10:39:54.738444    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e66ab34a3554"
	I0805 10:39:54.749682    9068 logs.go:123] Gathering logs for storage-provisioner [78e756f0378c] ...
	I0805 10:39:54.749693    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78e756f0378c"
	I0805 10:39:54.768612    9068 logs.go:123] Gathering logs for Docker ...
	I0805 10:39:54.768626    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:39:54.791618    9068 logs.go:123] Gathering logs for kube-apiserver [3c41f12d029f] ...
	I0805 10:39:54.791625    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c41f12d029f"
	I0805 10:39:54.829917    9068 logs.go:123] Gathering logs for etcd [0d7ebfcf0f52] ...
	I0805 10:39:54.829929    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d7ebfcf0f52"
	I0805 10:39:54.843491    9068 logs.go:123] Gathering logs for etcd [636206c34e2e] ...
	I0805 10:39:54.843503    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 636206c34e2e"
	I0805 10:39:54.857576    9068 logs.go:123] Gathering logs for kube-scheduler [ea53529dcbfd] ...
	I0805 10:39:54.857590    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea53529dcbfd"
	I0805 10:39:54.869249    9068 logs.go:123] Gathering logs for kube-controller-manager [a2439666ad31] ...
	I0805 10:39:54.869262    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2439666ad31"
	I0805 10:39:54.892369    9068 logs.go:123] Gathering logs for kube-controller-manager [e1cc9e5e2f59] ...
	I0805 10:39:54.892382    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1cc9e5e2f59"
	I0805 10:39:57.407375    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:40:02.409674    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:40:02.410040    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:40:02.443504    9068 logs.go:276] 2 containers: [399a26756750 3c41f12d029f]
	I0805 10:40:02.443627    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:40:02.473566    9068 logs.go:276] 2 containers: [0d7ebfcf0f52 636206c34e2e]
	I0805 10:40:02.473645    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:40:02.487656    9068 logs.go:276] 1 containers: [3fc287268b86]
	I0805 10:40:02.487730    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:40:02.503056    9068 logs.go:276] 2 containers: [ea53529dcbfd 91f7a199884a]
	I0805 10:40:02.503121    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:40:02.513936    9068 logs.go:276] 1 containers: [a29b2ad2a3db]
	I0805 10:40:02.514003    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:40:02.527874    9068 logs.go:276] 2 containers: [a2439666ad31 e1cc9e5e2f59]
	I0805 10:40:02.527942    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:40:02.538024    9068 logs.go:276] 0 containers: []
	W0805 10:40:02.538036    9068 logs.go:278] No container was found matching "kindnet"
	I0805 10:40:02.538091    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:40:02.548611    9068 logs.go:276] 2 containers: [e66ab34a3554 78e756f0378c]
	I0805 10:40:02.548628    9068 logs.go:123] Gathering logs for kube-proxy [a29b2ad2a3db] ...
	I0805 10:40:02.548634    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a29b2ad2a3db"
	I0805 10:40:02.560085    9068 logs.go:123] Gathering logs for kube-controller-manager [e1cc9e5e2f59] ...
	I0805 10:40:02.560096    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1cc9e5e2f59"
	I0805 10:40:02.574195    9068 logs.go:123] Gathering logs for storage-provisioner [78e756f0378c] ...
	I0805 10:40:02.574207    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78e756f0378c"
	I0805 10:40:02.586084    9068 logs.go:123] Gathering logs for dmesg ...
	I0805 10:40:02.586096    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:40:02.590431    9068 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:40:02.590441    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:40:02.625374    9068 logs.go:123] Gathering logs for kube-apiserver [3c41f12d029f] ...
	I0805 10:40:02.625385    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c41f12d029f"
	I0805 10:40:02.673172    9068 logs.go:123] Gathering logs for coredns [3fc287268b86] ...
	I0805 10:40:02.673184    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fc287268b86"
	I0805 10:40:02.684456    9068 logs.go:123] Gathering logs for storage-provisioner [e66ab34a3554] ...
	I0805 10:40:02.684467    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e66ab34a3554"
	I0805 10:40:02.697498    9068 logs.go:123] Gathering logs for Docker ...
	I0805 10:40:02.697510    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:40:02.721980    9068 logs.go:123] Gathering logs for kubelet ...
	I0805 10:40:02.721987    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:40:02.759699    9068 logs.go:123] Gathering logs for kube-apiserver [399a26756750] ...
	I0805 10:40:02.759708    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 399a26756750"
	I0805 10:40:02.773599    9068 logs.go:123] Gathering logs for etcd [0d7ebfcf0f52] ...
	I0805 10:40:02.773611    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d7ebfcf0f52"
	I0805 10:40:02.787186    9068 logs.go:123] Gathering logs for kube-scheduler [ea53529dcbfd] ...
	I0805 10:40:02.787196    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea53529dcbfd"
	I0805 10:40:02.799095    9068 logs.go:123] Gathering logs for kube-scheduler [91f7a199884a] ...
	I0805 10:40:02.799106    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91f7a199884a"
	I0805 10:40:02.811061    9068 logs.go:123] Gathering logs for kube-controller-manager [a2439666ad31] ...
	I0805 10:40:02.811073    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2439666ad31"
	I0805 10:40:02.828849    9068 logs.go:123] Gathering logs for etcd [636206c34e2e] ...
	I0805 10:40:02.828862    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 636206c34e2e"
	I0805 10:40:02.843000    9068 logs.go:123] Gathering logs for container status ...
	I0805 10:40:02.843011    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
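
Most components here report two container IDs because docker ps -a also lists exited containers; two IDs for a component such as kube-apiserver typically mean an earlier instance exited and the kubelet started a replacement, which is why the gatherer tails both 399a26756750 and 3c41f12d029f. To tell the live instance from the dead one, add a status column (illustrative):

```bash
docker ps -a --filter=name=k8s_kube-apiserver --format '{{.ID}}\t{{.Status}}\t{{.Names}}'
```
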
	I0805 10:40:05.357052    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:40:10.359813    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:40:10.360066    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:40:10.393464    9068 logs.go:276] 2 containers: [399a26756750 3c41f12d029f]
	I0805 10:40:10.393595    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:40:10.416016    9068 logs.go:276] 2 containers: [0d7ebfcf0f52 636206c34e2e]
	I0805 10:40:10.416119    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:40:10.433459    9068 logs.go:276] 1 containers: [3fc287268b86]
	I0805 10:40:10.433558    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:40:10.447268    9068 logs.go:276] 2 containers: [ea53529dcbfd 91f7a199884a]
	I0805 10:40:10.447348    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:40:10.458102    9068 logs.go:276] 1 containers: [a29b2ad2a3db]
	I0805 10:40:10.458171    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:40:10.468432    9068 logs.go:276] 2 containers: [a2439666ad31 e1cc9e5e2f59]
	I0805 10:40:10.468501    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:40:10.478311    9068 logs.go:276] 0 containers: []
	W0805 10:40:10.478331    9068 logs.go:278] No container was found matching "kindnet"
	I0805 10:40:10.478390    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:40:10.494842    9068 logs.go:276] 2 containers: [e66ab34a3554 78e756f0378c]
	I0805 10:40:10.494862    9068 logs.go:123] Gathering logs for kube-apiserver [3c41f12d029f] ...
	I0805 10:40:10.494868    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c41f12d029f"
	I0805 10:40:10.532123    9068 logs.go:123] Gathering logs for kube-scheduler [ea53529dcbfd] ...
	I0805 10:40:10.532134    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea53529dcbfd"
	I0805 10:40:10.543523    9068 logs.go:123] Gathering logs for dmesg ...
	I0805 10:40:10.543534    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:40:10.547995    9068 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:40:10.548002    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:40:10.585609    9068 logs.go:123] Gathering logs for etcd [0d7ebfcf0f52] ...
	I0805 10:40:10.585620    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d7ebfcf0f52"
	I0805 10:40:10.600023    9068 logs.go:123] Gathering logs for storage-provisioner [e66ab34a3554] ...
	I0805 10:40:10.600036    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e66ab34a3554"
	I0805 10:40:10.611764    9068 logs.go:123] Gathering logs for kube-scheduler [91f7a199884a] ...
	I0805 10:40:10.611775    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91f7a199884a"
	I0805 10:40:10.624428    9068 logs.go:123] Gathering logs for kube-controller-manager [a2439666ad31] ...
	I0805 10:40:10.624440    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2439666ad31"
	I0805 10:40:10.641661    9068 logs.go:123] Gathering logs for container status ...
	I0805 10:40:10.641671    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:40:10.654059    9068 logs.go:123] Gathering logs for kube-controller-manager [e1cc9e5e2f59] ...
	I0805 10:40:10.654071    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1cc9e5e2f59"
	I0805 10:40:10.669507    9068 logs.go:123] Gathering logs for storage-provisioner [78e756f0378c] ...
	I0805 10:40:10.669523    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78e756f0378c"
	I0805 10:40:10.681265    9068 logs.go:123] Gathering logs for Docker ...
	I0805 10:40:10.681277    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:40:10.705641    9068 logs.go:123] Gathering logs for kubelet ...
	I0805 10:40:10.705650    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:40:10.743377    9068 logs.go:123] Gathering logs for kube-apiserver [399a26756750] ...
	I0805 10:40:10.743384    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 399a26756750"
	I0805 10:40:10.757094    9068 logs.go:123] Gathering logs for etcd [636206c34e2e] ...
	I0805 10:40:10.757109    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 636206c34e2e"
	I0805 10:40:10.772784    9068 logs.go:123] Gathering logs for coredns [3fc287268b86] ...
	I0805 10:40:10.772801    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fc287268b86"
	I0805 10:40:10.784053    9068 logs.go:123] Gathering logs for kube-proxy [a29b2ad2a3db] ...
	I0805 10:40:10.784064    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a29b2ad2a3db"
	I0805 10:40:13.297449    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:40:18.299741    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:40:18.299949    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:40:18.329441    9068 logs.go:276] 2 containers: [399a26756750 3c41f12d029f]
	I0805 10:40:18.329557    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:40:18.346858    9068 logs.go:276] 2 containers: [0d7ebfcf0f52 636206c34e2e]
	I0805 10:40:18.346946    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:40:18.360386    9068 logs.go:276] 1 containers: [3fc287268b86]
	I0805 10:40:18.360463    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:40:18.371922    9068 logs.go:276] 2 containers: [ea53529dcbfd 91f7a199884a]
	I0805 10:40:18.371997    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:40:18.383208    9068 logs.go:276] 1 containers: [a29b2ad2a3db]
	I0805 10:40:18.383276    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:40:18.394082    9068 logs.go:276] 2 containers: [a2439666ad31 e1cc9e5e2f59]
	I0805 10:40:18.394156    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:40:18.404922    9068 logs.go:276] 0 containers: []
	W0805 10:40:18.404935    9068 logs.go:278] No container was found matching "kindnet"
	I0805 10:40:18.405001    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:40:18.420326    9068 logs.go:276] 2 containers: [e66ab34a3554 78e756f0378c]
	I0805 10:40:18.420342    9068 logs.go:123] Gathering logs for kube-scheduler [ea53529dcbfd] ...
	I0805 10:40:18.420349    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea53529dcbfd"
	I0805 10:40:18.432096    9068 logs.go:123] Gathering logs for kube-scheduler [91f7a199884a] ...
	I0805 10:40:18.432107    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91f7a199884a"
	I0805 10:40:18.443821    9068 logs.go:123] Gathering logs for etcd [0d7ebfcf0f52] ...
	I0805 10:40:18.443832    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d7ebfcf0f52"
	I0805 10:40:18.461334    9068 logs.go:123] Gathering logs for etcd [636206c34e2e] ...
	I0805 10:40:18.461344    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 636206c34e2e"
	I0805 10:40:18.475821    9068 logs.go:123] Gathering logs for kube-proxy [a29b2ad2a3db] ...
	I0805 10:40:18.475832    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a29b2ad2a3db"
	I0805 10:40:18.487670    9068 logs.go:123] Gathering logs for kube-controller-manager [e1cc9e5e2f59] ...
	I0805 10:40:18.487681    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1cc9e5e2f59"
	I0805 10:40:18.501506    9068 logs.go:123] Gathering logs for kube-apiserver [3c41f12d029f] ...
	I0805 10:40:18.501520    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c41f12d029f"
	I0805 10:40:18.540890    9068 logs.go:123] Gathering logs for dmesg ...
	I0805 10:40:18.540913    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:40:18.549708    9068 logs.go:123] Gathering logs for storage-provisioner [e66ab34a3554] ...
	I0805 10:40:18.549721    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e66ab34a3554"
	I0805 10:40:18.562598    9068 logs.go:123] Gathering logs for kubelet ...
	I0805 10:40:18.562609    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:40:18.600141    9068 logs.go:123] Gathering logs for kube-apiserver [399a26756750] ...
	I0805 10:40:18.600152    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 399a26756750"
	I0805 10:40:18.615033    9068 logs.go:123] Gathering logs for coredns [3fc287268b86] ...
	I0805 10:40:18.615047    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fc287268b86"
	I0805 10:40:18.626695    9068 logs.go:123] Gathering logs for kube-controller-manager [a2439666ad31] ...
	I0805 10:40:18.626708    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2439666ad31"
	I0805 10:40:18.644176    9068 logs.go:123] Gathering logs for storage-provisioner [78e756f0378c] ...
	I0805 10:40:18.644187    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78e756f0378c"
	I0805 10:40:18.656062    9068 logs.go:123] Gathering logs for Docker ...
	I0805 10:40:18.656075    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:40:18.679659    9068 logs.go:123] Gathering logs for container status ...
	I0805 10:40:18.679670    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:40:18.691566    9068 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:40:18.691579    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:40:21.232731    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:40:26.233491    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:40:26.233834    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:40:26.273617    9068 logs.go:276] 2 containers: [399a26756750 3c41f12d029f]
	I0805 10:40:26.273754    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:40:26.296267    9068 logs.go:276] 2 containers: [0d7ebfcf0f52 636206c34e2e]
	I0805 10:40:26.296376    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:40:26.312017    9068 logs.go:276] 1 containers: [3fc287268b86]
	I0805 10:40:26.312097    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:40:26.328392    9068 logs.go:276] 2 containers: [ea53529dcbfd 91f7a199884a]
	I0805 10:40:26.328465    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:40:26.344458    9068 logs.go:276] 1 containers: [a29b2ad2a3db]
	I0805 10:40:26.344527    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:40:26.355536    9068 logs.go:276] 2 containers: [a2439666ad31 e1cc9e5e2f59]
	I0805 10:40:26.355607    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:40:26.368150    9068 logs.go:276] 0 containers: []
	W0805 10:40:26.368163    9068 logs.go:278] No container was found matching "kindnet"
	I0805 10:40:26.368224    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:40:26.379197    9068 logs.go:276] 2 containers: [e66ab34a3554 78e756f0378c]
	I0805 10:40:26.379218    9068 logs.go:123] Gathering logs for kube-apiserver [399a26756750] ...
	I0805 10:40:26.379223    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 399a26756750"
	I0805 10:40:26.398070    9068 logs.go:123] Gathering logs for kube-apiserver [3c41f12d029f] ...
	I0805 10:40:26.398079    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c41f12d029f"
	I0805 10:40:26.436386    9068 logs.go:123] Gathering logs for coredns [3fc287268b86] ...
	I0805 10:40:26.436397    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fc287268b86"
	I0805 10:40:26.448883    9068 logs.go:123] Gathering logs for storage-provisioner [e66ab34a3554] ...
	I0805 10:40:26.448896    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e66ab34a3554"
	I0805 10:40:26.461101    9068 logs.go:123] Gathering logs for Docker ...
	I0805 10:40:26.461112    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:40:26.485798    9068 logs.go:123] Gathering logs for kube-controller-manager [e1cc9e5e2f59] ...
	I0805 10:40:26.485808    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1cc9e5e2f59"
	I0805 10:40:26.499822    9068 logs.go:123] Gathering logs for storage-provisioner [78e756f0378c] ...
	I0805 10:40:26.499833    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78e756f0378c"
	I0805 10:40:26.511490    9068 logs.go:123] Gathering logs for container status ...
	I0805 10:40:26.511502    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:40:26.523367    9068 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:40:26.523378    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:40:26.560330    9068 logs.go:123] Gathering logs for etcd [0d7ebfcf0f52] ...
	I0805 10:40:26.560343    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d7ebfcf0f52"
	I0805 10:40:26.576700    9068 logs.go:123] Gathering logs for etcd [636206c34e2e] ...
	I0805 10:40:26.576713    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 636206c34e2e"
	I0805 10:40:26.590753    9068 logs.go:123] Gathering logs for kube-proxy [a29b2ad2a3db] ...
	I0805 10:40:26.590764    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a29b2ad2a3db"
	I0805 10:40:26.602550    9068 logs.go:123] Gathering logs for kube-controller-manager [a2439666ad31] ...
	I0805 10:40:26.602562    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2439666ad31"
	I0805 10:40:26.623310    9068 logs.go:123] Gathering logs for kubelet ...
	I0805 10:40:26.623324    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:40:26.660540    9068 logs.go:123] Gathering logs for dmesg ...
	I0805 10:40:26.660552    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:40:26.664525    9068 logs.go:123] Gathering logs for kube-scheduler [ea53529dcbfd] ...
	I0805 10:40:26.664533    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea53529dcbfd"
	I0805 10:40:26.676804    9068 logs.go:123] Gathering logs for kube-scheduler [91f7a199884a] ...
	I0805 10:40:26.676815    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91f7a199884a"
	I0805 10:40:29.192049    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:40:34.194562    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:40:34.194851    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:40:34.223048    9068 logs.go:276] 2 containers: [399a26756750 3c41f12d029f]
	I0805 10:40:34.223174    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:40:34.240220    9068 logs.go:276] 2 containers: [0d7ebfcf0f52 636206c34e2e]
	I0805 10:40:34.240308    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:40:34.253445    9068 logs.go:276] 1 containers: [3fc287268b86]
	I0805 10:40:34.253513    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:40:34.265416    9068 logs.go:276] 2 containers: [ea53529dcbfd 91f7a199884a]
	I0805 10:40:34.265482    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:40:34.275991    9068 logs.go:276] 1 containers: [a29b2ad2a3db]
	I0805 10:40:34.276058    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:40:34.286304    9068 logs.go:276] 2 containers: [a2439666ad31 e1cc9e5e2f59]
	I0805 10:40:34.286375    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:40:34.296623    9068 logs.go:276] 0 containers: []
	W0805 10:40:34.296634    9068 logs.go:278] No container was found matching "kindnet"
	I0805 10:40:34.296685    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:40:34.306970    9068 logs.go:276] 2 containers: [e66ab34a3554 78e756f0378c]
	I0805 10:40:34.306989    9068 logs.go:123] Gathering logs for kube-apiserver [399a26756750] ...
	I0805 10:40:34.306995    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 399a26756750"
	I0805 10:40:34.324922    9068 logs.go:123] Gathering logs for coredns [3fc287268b86] ...
	I0805 10:40:34.324934    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fc287268b86"
	I0805 10:40:34.336758    9068 logs.go:123] Gathering logs for container status ...
	I0805 10:40:34.336769    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:40:34.349011    9068 logs.go:123] Gathering logs for dmesg ...
	I0805 10:40:34.349023    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:40:34.353223    9068 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:40:34.353231    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:40:34.387939    9068 logs.go:123] Gathering logs for storage-provisioner [78e756f0378c] ...
	I0805 10:40:34.387949    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78e756f0378c"
	I0805 10:40:34.399418    9068 logs.go:123] Gathering logs for etcd [636206c34e2e] ...
	I0805 10:40:34.399428    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 636206c34e2e"
	I0805 10:40:34.413862    9068 logs.go:123] Gathering logs for kube-controller-manager [e1cc9e5e2f59] ...
	I0805 10:40:34.413873    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1cc9e5e2f59"
	I0805 10:40:34.427944    9068 logs.go:123] Gathering logs for etcd [0d7ebfcf0f52] ...
	I0805 10:40:34.427956    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d7ebfcf0f52"
	I0805 10:40:34.442122    9068 logs.go:123] Gathering logs for kube-scheduler [ea53529dcbfd] ...
	I0805 10:40:34.442132    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea53529dcbfd"
	I0805 10:40:34.454200    9068 logs.go:123] Gathering logs for kube-scheduler [91f7a199884a] ...
	I0805 10:40:34.454212    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91f7a199884a"
	I0805 10:40:34.466351    9068 logs.go:123] Gathering logs for kube-proxy [a29b2ad2a3db] ...
	I0805 10:40:34.466360    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a29b2ad2a3db"
	I0805 10:40:34.478074    9068 logs.go:123] Gathering logs for kube-controller-manager [a2439666ad31] ...
	I0805 10:40:34.478083    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2439666ad31"
	I0805 10:40:34.496225    9068 logs.go:123] Gathering logs for storage-provisioner [e66ab34a3554] ...
	I0805 10:40:34.496235    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e66ab34a3554"
	I0805 10:40:34.507572    9068 logs.go:123] Gathering logs for kubelet ...
	I0805 10:40:34.507583    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:40:34.543303    9068 logs.go:123] Gathering logs for kube-apiserver [3c41f12d029f] ...
	I0805 10:40:34.543311    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c41f12d029f"
	I0805 10:40:34.580285    9068 logs.go:123] Gathering logs for Docker ...
	I0805 10:40:34.580300    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:40:37.105244    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:40:42.107531    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:40:42.107811    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:40:42.132407    9068 logs.go:276] 2 containers: [399a26756750 3c41f12d029f]
	I0805 10:40:42.132520    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:40:42.149177    9068 logs.go:276] 2 containers: [0d7ebfcf0f52 636206c34e2e]
	I0805 10:40:42.149266    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:40:42.162450    9068 logs.go:276] 1 containers: [3fc287268b86]
	I0805 10:40:42.162515    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:40:42.173997    9068 logs.go:276] 2 containers: [ea53529dcbfd 91f7a199884a]
	I0805 10:40:42.174060    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:40:42.184688    9068 logs.go:276] 1 containers: [a29b2ad2a3db]
	I0805 10:40:42.184758    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:40:42.195409    9068 logs.go:276] 2 containers: [a2439666ad31 e1cc9e5e2f59]
	I0805 10:40:42.195470    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:40:42.205801    9068 logs.go:276] 0 containers: []
	W0805 10:40:42.205816    9068 logs.go:278] No container was found matching "kindnet"
	I0805 10:40:42.205878    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:40:42.216203    9068 logs.go:276] 2 containers: [e66ab34a3554 78e756f0378c]
	I0805 10:40:42.216218    9068 logs.go:123] Gathering logs for Docker ...
	I0805 10:40:42.216224    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:40:42.240659    9068 logs.go:123] Gathering logs for container status ...
	I0805 10:40:42.240671    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:40:42.255213    9068 logs.go:123] Gathering logs for kube-scheduler [ea53529dcbfd] ...
	I0805 10:40:42.255232    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea53529dcbfd"
	I0805 10:40:42.267319    9068 logs.go:123] Gathering logs for kube-controller-manager [a2439666ad31] ...
	I0805 10:40:42.267331    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2439666ad31"
	I0805 10:40:42.284221    9068 logs.go:123] Gathering logs for kube-controller-manager [e1cc9e5e2f59] ...
	I0805 10:40:42.284235    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1cc9e5e2f59"
	I0805 10:40:42.297812    9068 logs.go:123] Gathering logs for storage-provisioner [e66ab34a3554] ...
	I0805 10:40:42.297822    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e66ab34a3554"
	I0805 10:40:42.309774    9068 logs.go:123] Gathering logs for storage-provisioner [78e756f0378c] ...
	I0805 10:40:42.309785    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78e756f0378c"
	I0805 10:40:42.320785    9068 logs.go:123] Gathering logs for kubelet ...
	I0805 10:40:42.320796    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:40:42.359812    9068 logs.go:123] Gathering logs for etcd [0d7ebfcf0f52] ...
	I0805 10:40:42.359821    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d7ebfcf0f52"
	I0805 10:40:42.374013    9068 logs.go:123] Gathering logs for kube-scheduler [91f7a199884a] ...
	I0805 10:40:42.374026    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91f7a199884a"
	I0805 10:40:42.386940    9068 logs.go:123] Gathering logs for kube-proxy [a29b2ad2a3db] ...
	I0805 10:40:42.386951    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a29b2ad2a3db"
	I0805 10:40:42.398816    9068 logs.go:123] Gathering logs for dmesg ...
	I0805 10:40:42.398827    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:40:42.402902    9068 logs.go:123] Gathering logs for etcd [636206c34e2e] ...
	I0805 10:40:42.402909    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 636206c34e2e"
	I0805 10:40:42.417547    9068 logs.go:123] Gathering logs for kube-apiserver [3c41f12d029f] ...
	I0805 10:40:42.417557    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c41f12d029f"
	I0805 10:40:42.455339    9068 logs.go:123] Gathering logs for coredns [3fc287268b86] ...
	I0805 10:40:42.455351    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fc287268b86"
	I0805 10:40:42.466800    9068 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:40:42.466812    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:40:42.504471    9068 logs.go:123] Gathering logs for kube-apiserver [399a26756750] ...
	I0805 10:40:42.504481    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 399a26756750"
	I0805 10:40:45.021428    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:40:50.023671    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:40:50.023838    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:40:50.038120    9068 logs.go:276] 2 containers: [399a26756750 3c41f12d029f]
	I0805 10:40:50.038201    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:40:50.049845    9068 logs.go:276] 2 containers: [0d7ebfcf0f52 636206c34e2e]
	I0805 10:40:50.049906    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:40:50.063526    9068 logs.go:276] 1 containers: [3fc287268b86]
	I0805 10:40:50.063605    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:40:50.073953    9068 logs.go:276] 2 containers: [ea53529dcbfd 91f7a199884a]
	I0805 10:40:50.074015    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:40:50.084478    9068 logs.go:276] 1 containers: [a29b2ad2a3db]
	I0805 10:40:50.084550    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:40:50.095411    9068 logs.go:276] 2 containers: [a2439666ad31 e1cc9e5e2f59]
	I0805 10:40:50.095472    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:40:50.105762    9068 logs.go:276] 0 containers: []
	W0805 10:40:50.105775    9068 logs.go:278] No container was found matching "kindnet"
	I0805 10:40:50.105835    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:40:50.116482    9068 logs.go:276] 2 containers: [e66ab34a3554 78e756f0378c]
	I0805 10:40:50.116501    9068 logs.go:123] Gathering logs for kube-proxy [a29b2ad2a3db] ...
	I0805 10:40:50.116506    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a29b2ad2a3db"
	I0805 10:40:50.127729    9068 logs.go:123] Gathering logs for kube-controller-manager [a2439666ad31] ...
	I0805 10:40:50.127740    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2439666ad31"
	I0805 10:40:50.144918    9068 logs.go:123] Gathering logs for kube-controller-manager [e1cc9e5e2f59] ...
	I0805 10:40:50.144929    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1cc9e5e2f59"
	I0805 10:40:50.159207    9068 logs.go:123] Gathering logs for storage-provisioner [e66ab34a3554] ...
	I0805 10:40:50.159217    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e66ab34a3554"
	I0805 10:40:50.173342    9068 logs.go:123] Gathering logs for container status ...
	I0805 10:40:50.173354    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
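(The container-status command above is a shell fallback chain: the backtick substitution resolves to crictl's path when it is on PATH, and to the literal word crictl otherwise so the sudo command still parses and simply fails; the trailing || then runs sudo docker ps -a whenever the crictl branch errors out. The same preference order expressed in Go, as an illustrative sketch rather than minikube's implementation:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // containerStatus prefers crictl when it is installed and falls back
    // to docker ps -a otherwise, matching the shell chain in the log.
    func containerStatus() ([]byte, error) {
        if _, err := exec.LookPath("crictl"); err == nil {
            out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
            if err == nil {
                return out, nil
            }
        }
        return exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
    }

    func main() {
        out, err := containerStatus()
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Print(string(out))
    })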
	I0805 10:40:50.185252    9068 logs.go:123] Gathering logs for kubelet ...
	I0805 10:40:50.185264    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:40:50.223925    9068 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:40:50.223933    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:40:50.259851    9068 logs.go:123] Gathering logs for etcd [0d7ebfcf0f52] ...
	I0805 10:40:50.259869    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d7ebfcf0f52"
	I0805 10:40:50.274701    9068 logs.go:123] Gathering logs for etcd [636206c34e2e] ...
	I0805 10:40:50.274712    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 636206c34e2e"
	I0805 10:40:50.290412    9068 logs.go:123] Gathering logs for coredns [3fc287268b86] ...
	I0805 10:40:50.290423    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fc287268b86"
	I0805 10:40:50.301586    9068 logs.go:123] Gathering logs for kube-scheduler [ea53529dcbfd] ...
	I0805 10:40:50.301598    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea53529dcbfd"
	I0805 10:40:50.313589    9068 logs.go:123] Gathering logs for storage-provisioner [78e756f0378c] ...
	I0805 10:40:50.313601    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78e756f0378c"
	I0805 10:40:50.324779    9068 logs.go:123] Gathering logs for Docker ...
	I0805 10:40:50.324792    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:40:50.348056    9068 logs.go:123] Gathering logs for dmesg ...
	I0805 10:40:50.348066    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:40:50.351710    9068 logs.go:123] Gathering logs for kube-apiserver [399a26756750] ...
	I0805 10:40:50.351719    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 399a26756750"
	I0805 10:40:50.365342    9068 logs.go:123] Gathering logs for kube-apiserver [3c41f12d029f] ...
	I0805 10:40:50.365354    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c41f12d029f"
	I0805 10:40:50.406517    9068 logs.go:123] Gathering logs for kube-scheduler [91f7a199884a] ...
	I0805 10:40:50.406525    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91f7a199884a"
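(Each "Gathering logs for X" pair above tails the last 400 lines of one container's logs — or of a systemd unit, for the kubelet and Docker entries — through minikube's ssh_runner. A condensed sketch of the per-container step, assuming direct local docker access instead of the SSH hop used in the report:

    package main

    import (
        "fmt"
        "os/exec"
        "strconv"
    )

    // gatherLogs tails the last n lines of each container's logs, the way
    // the "Gathering logs for ..." steps above do for every discovered ID.
    func gatherLogs(name string, ids []string, n int) {
        for _, id := range ids {
            fmt.Printf("Gathering logs for %s [%s] ...\n", name, id)
            out, err := exec.Command("docker", "logs",
                "--tail", strconv.Itoa(n), id).CombinedOutput()
            if err != nil {
                fmt.Println("gather failed:", err)
                continue
            }
            fmt.Print(string(out))
        }
    }

    func main() {
        // IDs taken from the enumeration lines above
        gatherLogs("etcd", []string{"0d7ebfcf0f52", "636206c34e2e"}, 400)
    })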
	I0805 10:40:52.926854    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:40:57.928479    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:40:57.928599    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:40:57.945867    9068 logs.go:276] 2 containers: [399a26756750 3c41f12d029f]
	I0805 10:40:57.945962    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:40:57.959336    9068 logs.go:276] 2 containers: [0d7ebfcf0f52 636206c34e2e]
	I0805 10:40:57.959412    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:40:57.970803    9068 logs.go:276] 1 containers: [3fc287268b86]
	I0805 10:40:57.970872    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:40:57.981555    9068 logs.go:276] 2 containers: [ea53529dcbfd 91f7a199884a]
	I0805 10:40:57.981622    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:40:57.992242    9068 logs.go:276] 1 containers: [a29b2ad2a3db]
	I0805 10:40:57.992322    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:40:58.002966    9068 logs.go:276] 2 containers: [a2439666ad31 e1cc9e5e2f59]
	I0805 10:40:58.003036    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:40:58.012677    9068 logs.go:276] 0 containers: []
	W0805 10:40:58.012694    9068 logs.go:278] No container was found matching "kindnet"
	I0805 10:40:58.012750    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:40:58.023070    9068 logs.go:276] 2 containers: [e66ab34a3554 78e756f0378c]
	I0805 10:40:58.023088    9068 logs.go:123] Gathering logs for kube-apiserver [399a26756750] ...
	I0805 10:40:58.023094    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 399a26756750"
	I0805 10:40:58.037182    9068 logs.go:123] Gathering logs for kube-apiserver [3c41f12d029f] ...
	I0805 10:40:58.037193    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c41f12d029f"
	I0805 10:40:58.074985    9068 logs.go:123] Gathering logs for kube-scheduler [ea53529dcbfd] ...
	I0805 10:40:58.074994    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea53529dcbfd"
	I0805 10:40:58.086752    9068 logs.go:123] Gathering logs for kubelet ...
	I0805 10:40:58.086765    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:40:58.122546    9068 logs.go:123] Gathering logs for etcd [0d7ebfcf0f52] ...
	I0805 10:40:58.122553    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d7ebfcf0f52"
	I0805 10:40:58.136131    9068 logs.go:123] Gathering logs for coredns [3fc287268b86] ...
	I0805 10:40:58.136142    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fc287268b86"
	I0805 10:40:58.148055    9068 logs.go:123] Gathering logs for kube-scheduler [91f7a199884a] ...
	I0805 10:40:58.148068    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91f7a199884a"
	I0805 10:40:58.160047    9068 logs.go:123] Gathering logs for kube-controller-manager [a2439666ad31] ...
	I0805 10:40:58.160059    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2439666ad31"
	I0805 10:40:58.178328    9068 logs.go:123] Gathering logs for storage-provisioner [e66ab34a3554] ...
	I0805 10:40:58.178339    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e66ab34a3554"
	I0805 10:40:58.189701    9068 logs.go:123] Gathering logs for dmesg ...
	I0805 10:40:58.189715    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:40:58.194115    9068 logs.go:123] Gathering logs for kube-proxy [a29b2ad2a3db] ...
	I0805 10:40:58.194123    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a29b2ad2a3db"
	I0805 10:40:58.205727    9068 logs.go:123] Gathering logs for Docker ...
	I0805 10:40:58.205735    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:40:58.228425    9068 logs.go:123] Gathering logs for container status ...
	I0805 10:40:58.228440    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:40:58.241532    9068 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:40:58.241544    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:40:58.277797    9068 logs.go:123] Gathering logs for etcd [636206c34e2e] ...
	I0805 10:40:58.277811    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 636206c34e2e"
	I0805 10:40:58.292440    9068 logs.go:123] Gathering logs for kube-controller-manager [e1cc9e5e2f59] ...
	I0805 10:40:58.292448    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1cc9e5e2f59"
	I0805 10:40:58.306423    9068 logs.go:123] Gathering logs for storage-provisioner [78e756f0378c] ...
	I0805 10:40:58.306431    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78e756f0378c"
	I0805 10:41:00.822779    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:41:05.824592    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:41:05.824985    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:41:05.857242    9068 logs.go:276] 2 containers: [399a26756750 3c41f12d029f]
	I0805 10:41:05.857376    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:41:05.883083    9068 logs.go:276] 2 containers: [0d7ebfcf0f52 636206c34e2e]
	I0805 10:41:05.883166    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:41:05.898126    9068 logs.go:276] 1 containers: [3fc287268b86]
	I0805 10:41:05.898191    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:41:05.909636    9068 logs.go:276] 2 containers: [ea53529dcbfd 91f7a199884a]
	I0805 10:41:05.909709    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:41:05.920826    9068 logs.go:276] 1 containers: [a29b2ad2a3db]
	I0805 10:41:05.920895    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:41:05.933817    9068 logs.go:276] 2 containers: [a2439666ad31 e1cc9e5e2f59]
	I0805 10:41:05.933887    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:41:05.944976    9068 logs.go:276] 0 containers: []
	W0805 10:41:05.944989    9068 logs.go:278] No container was found matching "kindnet"
	I0805 10:41:05.945049    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:41:05.956180    9068 logs.go:276] 2 containers: [e66ab34a3554 78e756f0378c]
	I0805 10:41:05.956204    9068 logs.go:123] Gathering logs for container status ...
	I0805 10:41:05.956209    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:41:05.972517    9068 logs.go:123] Gathering logs for coredns [3fc287268b86] ...
	I0805 10:41:05.972532    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fc287268b86"
	I0805 10:41:05.983805    9068 logs.go:123] Gathering logs for kube-scheduler [ea53529dcbfd] ...
	I0805 10:41:05.983815    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea53529dcbfd"
	I0805 10:41:05.995843    9068 logs.go:123] Gathering logs for kube-proxy [a29b2ad2a3db] ...
	I0805 10:41:05.995855    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a29b2ad2a3db"
	I0805 10:41:06.011924    9068 logs.go:123] Gathering logs for storage-provisioner [e66ab34a3554] ...
	I0805 10:41:06.011935    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e66ab34a3554"
	I0805 10:41:06.023564    9068 logs.go:123] Gathering logs for kubelet ...
	I0805 10:41:06.023579    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:41:06.060067    9068 logs.go:123] Gathering logs for kube-controller-manager [a2439666ad31] ...
	I0805 10:41:06.060076    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2439666ad31"
	I0805 10:41:06.078216    9068 logs.go:123] Gathering logs for storage-provisioner [78e756f0378c] ...
	I0805 10:41:06.078226    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78e756f0378c"
	I0805 10:41:06.089656    9068 logs.go:123] Gathering logs for Docker ...
	I0805 10:41:06.089668    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:41:06.112269    9068 logs.go:123] Gathering logs for kube-scheduler [91f7a199884a] ...
	I0805 10:41:06.112276    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91f7a199884a"
	I0805 10:41:06.124456    9068 logs.go:123] Gathering logs for kube-controller-manager [e1cc9e5e2f59] ...
	I0805 10:41:06.124468    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1cc9e5e2f59"
	I0805 10:41:06.138557    9068 logs.go:123] Gathering logs for etcd [0d7ebfcf0f52] ...
	I0805 10:41:06.138570    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d7ebfcf0f52"
	I0805 10:41:06.153924    9068 logs.go:123] Gathering logs for etcd [636206c34e2e] ...
	I0805 10:41:06.153935    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 636206c34e2e"
	I0805 10:41:06.169961    9068 logs.go:123] Gathering logs for dmesg ...
	I0805 10:41:06.169973    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:41:06.174436    9068 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:41:06.174443    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:41:06.209915    9068 logs.go:123] Gathering logs for kube-apiserver [399a26756750] ...
	I0805 10:41:06.209927    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 399a26756750"
	I0805 10:41:06.224179    9068 logs.go:123] Gathering logs for kube-apiserver [3c41f12d029f] ...
	I0805 10:41:06.224193    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c41f12d029f"
	I0805 10:41:08.763690    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:41:13.766002    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:41:13.766173    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:41:13.785905    9068 logs.go:276] 2 containers: [399a26756750 3c41f12d029f]
	I0805 10:41:13.786004    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:41:13.800817    9068 logs.go:276] 2 containers: [0d7ebfcf0f52 636206c34e2e]
	I0805 10:41:13.800897    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:41:13.813021    9068 logs.go:276] 1 containers: [3fc287268b86]
	I0805 10:41:13.813094    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:41:13.825659    9068 logs.go:276] 2 containers: [ea53529dcbfd 91f7a199884a]
	I0805 10:41:13.825730    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:41:13.835800    9068 logs.go:276] 1 containers: [a29b2ad2a3db]
	I0805 10:41:13.835873    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:41:13.846064    9068 logs.go:276] 2 containers: [a2439666ad31 e1cc9e5e2f59]
	I0805 10:41:13.846126    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:41:13.856572    9068 logs.go:276] 0 containers: []
	W0805 10:41:13.856587    9068 logs.go:278] No container was found matching "kindnet"
	I0805 10:41:13.856640    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:41:13.871200    9068 logs.go:276] 2 containers: [e66ab34a3554 78e756f0378c]
	I0805 10:41:13.871219    9068 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:41:13.871225    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:41:13.904960    9068 logs.go:123] Gathering logs for kube-apiserver [3c41f12d029f] ...
	I0805 10:41:13.904972    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c41f12d029f"
	I0805 10:41:13.942136    9068 logs.go:123] Gathering logs for kube-scheduler [ea53529dcbfd] ...
	I0805 10:41:13.942153    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea53529dcbfd"
	I0805 10:41:13.955544    9068 logs.go:123] Gathering logs for kube-scheduler [91f7a199884a] ...
	I0805 10:41:13.955562    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91f7a199884a"
	I0805 10:41:13.968234    9068 logs.go:123] Gathering logs for storage-provisioner [e66ab34a3554] ...
	I0805 10:41:13.968246    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e66ab34a3554"
	I0805 10:41:13.979910    9068 logs.go:123] Gathering logs for Docker ...
	I0805 10:41:13.979921    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:41:14.003007    9068 logs.go:123] Gathering logs for dmesg ...
	I0805 10:41:14.003019    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:41:14.007053    9068 logs.go:123] Gathering logs for etcd [0d7ebfcf0f52] ...
	I0805 10:41:14.007063    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d7ebfcf0f52"
	I0805 10:41:14.020576    9068 logs.go:123] Gathering logs for kube-controller-manager [e1cc9e5e2f59] ...
	I0805 10:41:14.020586    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1cc9e5e2f59"
	I0805 10:41:14.034977    9068 logs.go:123] Gathering logs for storage-provisioner [78e756f0378c] ...
	I0805 10:41:14.034985    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78e756f0378c"
	I0805 10:41:14.046064    9068 logs.go:123] Gathering logs for container status ...
	I0805 10:41:14.046080    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:41:14.057695    9068 logs.go:123] Gathering logs for kube-apiserver [399a26756750] ...
	I0805 10:41:14.057708    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 399a26756750"
	I0805 10:41:14.072336    9068 logs.go:123] Gathering logs for etcd [636206c34e2e] ...
	I0805 10:41:14.072346    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 636206c34e2e"
	I0805 10:41:14.087001    9068 logs.go:123] Gathering logs for kube-proxy [a29b2ad2a3db] ...
	I0805 10:41:14.087012    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a29b2ad2a3db"
	I0805 10:41:14.098651    9068 logs.go:123] Gathering logs for kubelet ...
	I0805 10:41:14.098663    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:41:14.136708    9068 logs.go:123] Gathering logs for coredns [3fc287268b86] ...
	I0805 10:41:14.136716    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fc287268b86"
	I0805 10:41:14.147782    9068 logs.go:123] Gathering logs for kube-controller-manager [a2439666ad31] ...
	I0805 10:41:14.147794    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2439666ad31"
	I0805 10:41:16.669434    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:41:21.671905    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:41:21.672106    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:41:21.694676    9068 logs.go:276] 2 containers: [399a26756750 3c41f12d029f]
	I0805 10:41:21.694781    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:41:21.709672    9068 logs.go:276] 2 containers: [0d7ebfcf0f52 636206c34e2e]
	I0805 10:41:21.709753    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:41:21.722506    9068 logs.go:276] 1 containers: [3fc287268b86]
	I0805 10:41:21.722581    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:41:21.733947    9068 logs.go:276] 2 containers: [ea53529dcbfd 91f7a199884a]
	I0805 10:41:21.734019    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:41:21.744155    9068 logs.go:276] 1 containers: [a29b2ad2a3db]
	I0805 10:41:21.744227    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:41:21.754446    9068 logs.go:276] 2 containers: [a2439666ad31 e1cc9e5e2f59]
	I0805 10:41:21.754520    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:41:21.765078    9068 logs.go:276] 0 containers: []
	W0805 10:41:21.765090    9068 logs.go:278] No container was found matching "kindnet"
	I0805 10:41:21.765146    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:41:21.775577    9068 logs.go:276] 2 containers: [e66ab34a3554 78e756f0378c]
	I0805 10:41:21.775596    9068 logs.go:123] Gathering logs for kubelet ...
	I0805 10:41:21.775601    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:41:21.815199    9068 logs.go:123] Gathering logs for kube-proxy [a29b2ad2a3db] ...
	I0805 10:41:21.815208    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a29b2ad2a3db"
	I0805 10:41:21.826681    9068 logs.go:123] Gathering logs for storage-provisioner [e66ab34a3554] ...
	I0805 10:41:21.826691    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e66ab34a3554"
	I0805 10:41:21.838344    9068 logs.go:123] Gathering logs for Docker ...
	I0805 10:41:21.838353    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:41:21.861629    9068 logs.go:123] Gathering logs for dmesg ...
	I0805 10:41:21.861636    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:41:21.865670    9068 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:41:21.865680    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:41:21.902501    9068 logs.go:123] Gathering logs for kube-apiserver [399a26756750] ...
	I0805 10:41:21.902512    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 399a26756750"
	I0805 10:41:21.916574    9068 logs.go:123] Gathering logs for kube-apiserver [3c41f12d029f] ...
	I0805 10:41:21.916584    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c41f12d029f"
	I0805 10:41:21.954328    9068 logs.go:123] Gathering logs for coredns [3fc287268b86] ...
	I0805 10:41:21.954339    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fc287268b86"
	I0805 10:41:21.965404    9068 logs.go:123] Gathering logs for kube-scheduler [ea53529dcbfd] ...
	I0805 10:41:21.965414    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea53529dcbfd"
	I0805 10:41:21.981344    9068 logs.go:123] Gathering logs for kube-controller-manager [e1cc9e5e2f59] ...
	I0805 10:41:21.981354    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1cc9e5e2f59"
	I0805 10:41:21.995093    9068 logs.go:123] Gathering logs for etcd [636206c34e2e] ...
	I0805 10:41:21.995105    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 636206c34e2e"
	I0805 10:41:22.009568    9068 logs.go:123] Gathering logs for kube-controller-manager [a2439666ad31] ...
	I0805 10:41:22.009579    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2439666ad31"
	I0805 10:41:22.026765    9068 logs.go:123] Gathering logs for container status ...
	I0805 10:41:22.026776    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:41:22.039373    9068 logs.go:123] Gathering logs for etcd [0d7ebfcf0f52] ...
	I0805 10:41:22.039383    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d7ebfcf0f52"
	I0805 10:41:22.053635    9068 logs.go:123] Gathering logs for kube-scheduler [91f7a199884a] ...
	I0805 10:41:22.053649    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91f7a199884a"
	I0805 10:41:22.069131    9068 logs.go:123] Gathering logs for storage-provisioner [78e756f0378c] ...
	I0805 10:41:22.069144    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78e756f0378c"
	I0805 10:41:24.582378    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:41:29.585002    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:41:29.585308    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:41:29.621044    9068 logs.go:276] 2 containers: [399a26756750 3c41f12d029f]
	I0805 10:41:29.621178    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:41:29.639330    9068 logs.go:276] 2 containers: [0d7ebfcf0f52 636206c34e2e]
	I0805 10:41:29.639425    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:41:29.663110    9068 logs.go:276] 1 containers: [3fc287268b86]
	I0805 10:41:29.663186    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:41:29.678500    9068 logs.go:276] 2 containers: [ea53529dcbfd 91f7a199884a]
	I0805 10:41:29.678583    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:41:29.689165    9068 logs.go:276] 1 containers: [a29b2ad2a3db]
	I0805 10:41:29.689236    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:41:29.699715    9068 logs.go:276] 2 containers: [a2439666ad31 e1cc9e5e2f59]
	I0805 10:41:29.699787    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:41:29.710442    9068 logs.go:276] 0 containers: []
	W0805 10:41:29.710456    9068 logs.go:278] No container was found matching "kindnet"
	I0805 10:41:29.710514    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:41:29.720854    9068 logs.go:276] 2 containers: [e66ab34a3554 78e756f0378c]
	I0805 10:41:29.720874    9068 logs.go:123] Gathering logs for kubelet ...
	I0805 10:41:29.720880    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:41:29.757538    9068 logs.go:123] Gathering logs for coredns [3fc287268b86] ...
	I0805 10:41:29.757554    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fc287268b86"
	I0805 10:41:29.769482    9068 logs.go:123] Gathering logs for kube-controller-manager [a2439666ad31] ...
	I0805 10:41:29.769495    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2439666ad31"
	I0805 10:41:29.793011    9068 logs.go:123] Gathering logs for kube-controller-manager [e1cc9e5e2f59] ...
	I0805 10:41:29.793022    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1cc9e5e2f59"
	I0805 10:41:29.810129    9068 logs.go:123] Gathering logs for container status ...
	I0805 10:41:29.810139    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:41:29.822010    9068 logs.go:123] Gathering logs for storage-provisioner [e66ab34a3554] ...
	I0805 10:41:29.822021    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e66ab34a3554"
	I0805 10:41:29.834559    9068 logs.go:123] Gathering logs for storage-provisioner [78e756f0378c] ...
	I0805 10:41:29.834571    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78e756f0378c"
	I0805 10:41:29.845875    9068 logs.go:123] Gathering logs for kube-apiserver [399a26756750] ...
	I0805 10:41:29.845887    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 399a26756750"
	I0805 10:41:29.859451    9068 logs.go:123] Gathering logs for etcd [0d7ebfcf0f52] ...
	I0805 10:41:29.859462    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d7ebfcf0f52"
	I0805 10:41:29.873630    9068 logs.go:123] Gathering logs for kube-scheduler [ea53529dcbfd] ...
	I0805 10:41:29.873641    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea53529dcbfd"
	I0805 10:41:29.885621    9068 logs.go:123] Gathering logs for kube-scheduler [91f7a199884a] ...
	I0805 10:41:29.885631    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91f7a199884a"
	I0805 10:41:29.897621    9068 logs.go:123] Gathering logs for kube-proxy [a29b2ad2a3db] ...
	I0805 10:41:29.897631    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a29b2ad2a3db"
	I0805 10:41:29.909563    9068 logs.go:123] Gathering logs for dmesg ...
	I0805 10:41:29.909579    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:41:29.913536    9068 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:41:29.913541    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:41:29.950275    9068 logs.go:123] Gathering logs for kube-apiserver [3c41f12d029f] ...
	I0805 10:41:29.950285    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c41f12d029f"
	I0805 10:41:29.996432    9068 logs.go:123] Gathering logs for etcd [636206c34e2e] ...
	I0805 10:41:29.996445    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 636206c34e2e"
	I0805 10:41:30.011592    9068 logs.go:123] Gathering logs for Docker ...
	I0805 10:41:30.011608    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:41:32.540854    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:41:37.543686    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:41:37.543898    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:41:37.563363    9068 logs.go:276] 2 containers: [399a26756750 3c41f12d029f]
	I0805 10:41:37.563455    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:41:37.580362    9068 logs.go:276] 2 containers: [0d7ebfcf0f52 636206c34e2e]
	I0805 10:41:37.580438    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:41:37.592548    9068 logs.go:276] 1 containers: [3fc287268b86]
	I0805 10:41:37.592615    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:41:37.603311    9068 logs.go:276] 2 containers: [ea53529dcbfd 91f7a199884a]
	I0805 10:41:37.603380    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:41:37.614022    9068 logs.go:276] 1 containers: [a29b2ad2a3db]
	I0805 10:41:37.614086    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:41:37.624717    9068 logs.go:276] 2 containers: [a2439666ad31 e1cc9e5e2f59]
	I0805 10:41:37.624788    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:41:37.635114    9068 logs.go:276] 0 containers: []
	W0805 10:41:37.635125    9068 logs.go:278] No container was found matching "kindnet"
	I0805 10:41:37.635177    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:41:37.645500    9068 logs.go:276] 2 containers: [e66ab34a3554 78e756f0378c]
	I0805 10:41:37.645517    9068 logs.go:123] Gathering logs for kube-apiserver [3c41f12d029f] ...
	I0805 10:41:37.645523    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c41f12d029f"
	I0805 10:41:37.684683    9068 logs.go:123] Gathering logs for etcd [0d7ebfcf0f52] ...
	I0805 10:41:37.684693    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d7ebfcf0f52"
	I0805 10:41:37.698277    9068 logs.go:123] Gathering logs for kube-controller-manager [a2439666ad31] ...
	I0805 10:41:37.698294    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2439666ad31"
	I0805 10:41:37.716071    9068 logs.go:123] Gathering logs for storage-provisioner [78e756f0378c] ...
	I0805 10:41:37.716084    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78e756f0378c"
	I0805 10:41:37.727146    9068 logs.go:123] Gathering logs for Docker ...
	I0805 10:41:37.727157    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:41:37.750685    9068 logs.go:123] Gathering logs for container status ...
	I0805 10:41:37.750695    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:41:37.762654    9068 logs.go:123] Gathering logs for kubelet ...
	I0805 10:41:37.762669    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:41:37.800030    9068 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:41:37.800037    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:41:37.835253    9068 logs.go:123] Gathering logs for etcd [636206c34e2e] ...
	I0805 10:41:37.835265    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 636206c34e2e"
	I0805 10:41:37.849979    9068 logs.go:123] Gathering logs for coredns [3fc287268b86] ...
	I0805 10:41:37.849990    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fc287268b86"
	I0805 10:41:37.861450    9068 logs.go:123] Gathering logs for kube-scheduler [ea53529dcbfd] ...
	I0805 10:41:37.861462    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea53529dcbfd"
	I0805 10:41:37.873609    9068 logs.go:123] Gathering logs for kube-scheduler [91f7a199884a] ...
	I0805 10:41:37.873624    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91f7a199884a"
	I0805 10:41:37.885831    9068 logs.go:123] Gathering logs for storage-provisioner [e66ab34a3554] ...
	I0805 10:41:37.885843    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e66ab34a3554"
	I0805 10:41:37.897786    9068 logs.go:123] Gathering logs for dmesg ...
	I0805 10:41:37.897803    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:41:37.901988    9068 logs.go:123] Gathering logs for kube-apiserver [399a26756750] ...
	I0805 10:41:37.901994    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 399a26756750"
	I0805 10:41:37.915987    9068 logs.go:123] Gathering logs for kube-proxy [a29b2ad2a3db] ...
	I0805 10:41:37.916002    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a29b2ad2a3db"
	I0805 10:41:37.928149    9068 logs.go:123] Gathering logs for kube-controller-manager [e1cc9e5e2f59] ...
	I0805 10:41:37.928160    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1cc9e5e2f59"
	I0805 10:41:40.444227    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:41:45.446827    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:41:45.447131    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:41:45.474366    9068 logs.go:276] 2 containers: [399a26756750 3c41f12d029f]
	I0805 10:41:45.474489    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:41:45.494231    9068 logs.go:276] 2 containers: [0d7ebfcf0f52 636206c34e2e]
	I0805 10:41:45.494326    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:41:45.511464    9068 logs.go:276] 1 containers: [3fc287268b86]
	I0805 10:41:45.511539    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:41:45.522605    9068 logs.go:276] 2 containers: [ea53529dcbfd 91f7a199884a]
	I0805 10:41:45.522680    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:41:45.533123    9068 logs.go:276] 1 containers: [a29b2ad2a3db]
	I0805 10:41:45.533186    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:41:45.543454    9068 logs.go:276] 2 containers: [a2439666ad31 e1cc9e5e2f59]
	I0805 10:41:45.543516    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:41:45.553904    9068 logs.go:276] 0 containers: []
	W0805 10:41:45.553915    9068 logs.go:278] No container was found matching "kindnet"
	I0805 10:41:45.553965    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:41:45.563986    9068 logs.go:276] 2 containers: [e66ab34a3554 78e756f0378c]
	I0805 10:41:45.564005    9068 logs.go:123] Gathering logs for kubelet ...
	I0805 10:41:45.564010    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:41:45.600725    9068 logs.go:123] Gathering logs for dmesg ...
	I0805 10:41:45.600735    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:41:45.604913    9068 logs.go:123] Gathering logs for kube-apiserver [3c41f12d029f] ...
	I0805 10:41:45.604920    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c41f12d029f"
	I0805 10:41:45.642113    9068 logs.go:123] Gathering logs for etcd [636206c34e2e] ...
	I0805 10:41:45.642123    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 636206c34e2e"
	I0805 10:41:45.656574    9068 logs.go:123] Gathering logs for kube-apiserver [399a26756750] ...
	I0805 10:41:45.656589    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 399a26756750"
	I0805 10:41:45.673263    9068 logs.go:123] Gathering logs for kube-scheduler [91f7a199884a] ...
	I0805 10:41:45.673275    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91f7a199884a"
	I0805 10:41:45.685278    9068 logs.go:123] Gathering logs for kube-proxy [a29b2ad2a3db] ...
	I0805 10:41:45.685292    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a29b2ad2a3db"
	I0805 10:41:45.697129    9068 logs.go:123] Gathering logs for storage-provisioner [78e756f0378c] ...
	I0805 10:41:45.697144    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78e756f0378c"
	I0805 10:41:45.708607    9068 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:41:45.708622    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:41:45.742673    9068 logs.go:123] Gathering logs for kube-controller-manager [a2439666ad31] ...
	I0805 10:41:45.742683    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2439666ad31"
	I0805 10:41:45.760176    9068 logs.go:123] Gathering logs for etcd [0d7ebfcf0f52] ...
	I0805 10:41:45.760190    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d7ebfcf0f52"
	I0805 10:41:45.773987    9068 logs.go:123] Gathering logs for coredns [3fc287268b86] ...
	I0805 10:41:45.773998    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fc287268b86"
	I0805 10:41:45.784934    9068 logs.go:123] Gathering logs for kube-scheduler [ea53529dcbfd] ...
	I0805 10:41:45.784947    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea53529dcbfd"
	I0805 10:41:45.799943    9068 logs.go:123] Gathering logs for kube-controller-manager [e1cc9e5e2f59] ...
	I0805 10:41:45.799959    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1cc9e5e2f59"
	I0805 10:41:45.813651    9068 logs.go:123] Gathering logs for storage-provisioner [e66ab34a3554] ...
	I0805 10:41:45.813662    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e66ab34a3554"
	I0805 10:41:45.825150    9068 logs.go:123] Gathering logs for Docker ...
	I0805 10:41:45.825161    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:41:45.848146    9068 logs.go:123] Gathering logs for container status ...
	I0805 10:41:45.848154    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:41:48.361479    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:41:53.363680    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:41:53.363771    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:41:53.374956    9068 logs.go:276] 2 containers: [399a26756750 3c41f12d029f]
	I0805 10:41:53.375030    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:41:53.385587    9068 logs.go:276] 2 containers: [0d7ebfcf0f52 636206c34e2e]
	I0805 10:41:53.385658    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:41:53.398052    9068 logs.go:276] 1 containers: [3fc287268b86]
	I0805 10:41:53.398122    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:41:53.408531    9068 logs.go:276] 2 containers: [ea53529dcbfd 91f7a199884a]
	I0805 10:41:53.408599    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:41:53.419492    9068 logs.go:276] 1 containers: [a29b2ad2a3db]
	I0805 10:41:53.419564    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:41:53.430317    9068 logs.go:276] 2 containers: [a2439666ad31 e1cc9e5e2f59]
	I0805 10:41:53.430384    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:41:53.440087    9068 logs.go:276] 0 containers: []
	W0805 10:41:53.440100    9068 logs.go:278] No container was found matching "kindnet"
	I0805 10:41:53.440163    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:41:53.450562    9068 logs.go:276] 2 containers: [e66ab34a3554 78e756f0378c]
	I0805 10:41:53.450578    9068 logs.go:123] Gathering logs for kube-controller-manager [e1cc9e5e2f59] ...
	I0805 10:41:53.450585    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1cc9e5e2f59"
	I0805 10:41:53.468003    9068 logs.go:123] Gathering logs for storage-provisioner [e66ab34a3554] ...
	I0805 10:41:53.468016    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e66ab34a3554"
	I0805 10:41:53.479598    9068 logs.go:123] Gathering logs for storage-provisioner [78e756f0378c] ...
	I0805 10:41:53.479609    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78e756f0378c"
	I0805 10:41:53.493710    9068 logs.go:123] Gathering logs for Docker ...
	I0805 10:41:53.493722    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:41:53.515417    9068 logs.go:123] Gathering logs for dmesg ...
	I0805 10:41:53.515424    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:41:53.519838    9068 logs.go:123] Gathering logs for etcd [636206c34e2e] ...
	I0805 10:41:53.519846    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 636206c34e2e"
	I0805 10:41:53.535778    9068 logs.go:123] Gathering logs for coredns [3fc287268b86] ...
	I0805 10:41:53.535789    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fc287268b86"
	I0805 10:41:53.547080    9068 logs.go:123] Gathering logs for kubelet ...
	I0805 10:41:53.547092    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:41:53.583631    9068 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:41:53.583642    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:41:53.619326    9068 logs.go:123] Gathering logs for kube-apiserver [3c41f12d029f] ...
	I0805 10:41:53.619342    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c41f12d029f"
	I0805 10:41:53.660987    9068 logs.go:123] Gathering logs for etcd [0d7ebfcf0f52] ...
	I0805 10:41:53.660999    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d7ebfcf0f52"
	I0805 10:41:53.675506    9068 logs.go:123] Gathering logs for kube-scheduler [ea53529dcbfd] ...
	I0805 10:41:53.675518    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea53529dcbfd"
	I0805 10:41:53.686995    9068 logs.go:123] Gathering logs for kube-controller-manager [a2439666ad31] ...
	I0805 10:41:53.687007    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2439666ad31"
	I0805 10:41:53.704717    9068 logs.go:123] Gathering logs for container status ...
	I0805 10:41:53.704731    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:41:53.721109    9068 logs.go:123] Gathering logs for kube-apiserver [399a26756750] ...
	I0805 10:41:53.721123    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 399a26756750"
	I0805 10:41:53.735045    9068 logs.go:123] Gathering logs for kube-scheduler [91f7a199884a] ...
	I0805 10:41:53.735059    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91f7a199884a"
	I0805 10:41:53.747160    9068 logs.go:123] Gathering logs for kube-proxy [a29b2ad2a3db] ...
	I0805 10:41:53.747172    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a29b2ad2a3db"
	I0805 10:41:56.260024    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:42:01.262366    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:42:01.262548    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:42:01.278005    9068 logs.go:276] 2 containers: [399a26756750 3c41f12d029f]
	I0805 10:42:01.278077    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:42:01.298003    9068 logs.go:276] 2 containers: [0d7ebfcf0f52 636206c34e2e]
	I0805 10:42:01.298076    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:42:01.314326    9068 logs.go:276] 1 containers: [3fc287268b86]
	I0805 10:42:01.314391    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:42:01.324954    9068 logs.go:276] 2 containers: [ea53529dcbfd 91f7a199884a]
	I0805 10:42:01.325027    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:42:01.335312    9068 logs.go:276] 1 containers: [a29b2ad2a3db]
	I0805 10:42:01.335376    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:42:01.350366    9068 logs.go:276] 2 containers: [a2439666ad31 e1cc9e5e2f59]
	I0805 10:42:01.350439    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:42:01.360957    9068 logs.go:276] 0 containers: []
	W0805 10:42:01.360970    9068 logs.go:278] No container was found matching "kindnet"
	I0805 10:42:01.361033    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:42:01.371440    9068 logs.go:276] 2 containers: [e66ab34a3554 78e756f0378c]
	I0805 10:42:01.371460    9068 logs.go:123] Gathering logs for kube-apiserver [399a26756750] ...
	I0805 10:42:01.371465    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 399a26756750"
	I0805 10:42:01.385308    9068 logs.go:123] Gathering logs for kube-controller-manager [a2439666ad31] ...
	I0805 10:42:01.385322    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2439666ad31"
	I0805 10:42:01.402452    9068 logs.go:123] Gathering logs for storage-provisioner [78e756f0378c] ...
	I0805 10:42:01.402464    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78e756f0378c"
	I0805 10:42:01.414446    9068 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:42:01.414459    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:42:01.450825    9068 logs.go:123] Gathering logs for coredns [3fc287268b86] ...
	I0805 10:42:01.450839    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fc287268b86"
	I0805 10:42:01.462465    9068 logs.go:123] Gathering logs for kube-proxy [a29b2ad2a3db] ...
	I0805 10:42:01.462475    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a29b2ad2a3db"
	I0805 10:42:01.474464    9068 logs.go:123] Gathering logs for Docker ...
	I0805 10:42:01.474476    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:42:01.496509    9068 logs.go:123] Gathering logs for container status ...
	I0805 10:42:01.496516    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:42:01.508176    9068 logs.go:123] Gathering logs for dmesg ...
	I0805 10:42:01.508187    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:42:01.512475    9068 logs.go:123] Gathering logs for kube-apiserver [3c41f12d029f] ...
	I0805 10:42:01.512484    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c41f12d029f"
	I0805 10:42:01.553396    9068 logs.go:123] Gathering logs for kube-scheduler [91f7a199884a] ...
	I0805 10:42:01.553410    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91f7a199884a"
	I0805 10:42:01.567009    9068 logs.go:123] Gathering logs for kubelet ...
	I0805 10:42:01.567020    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:42:01.607119    9068 logs.go:123] Gathering logs for etcd [0d7ebfcf0f52] ...
	I0805 10:42:01.607129    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d7ebfcf0f52"
	I0805 10:42:01.621588    9068 logs.go:123] Gathering logs for etcd [636206c34e2e] ...
	I0805 10:42:01.621598    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 636206c34e2e"
	I0805 10:42:01.635910    9068 logs.go:123] Gathering logs for kube-scheduler [ea53529dcbfd] ...
	I0805 10:42:01.635921    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea53529dcbfd"
	I0805 10:42:01.648145    9068 logs.go:123] Gathering logs for kube-controller-manager [e1cc9e5e2f59] ...
	I0805 10:42:01.648155    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1cc9e5e2f59"
	I0805 10:42:01.663393    9068 logs.go:123] Gathering logs for storage-provisioner [e66ab34a3554] ...
	I0805 10:42:01.663407    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e66ab34a3554"
	I0805 10:42:04.177423    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:42:09.179787    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:42:09.179968    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:42:09.201650    9068 logs.go:276] 2 containers: [399a26756750 3c41f12d029f]
	I0805 10:42:09.201732    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:42:09.214758    9068 logs.go:276] 2 containers: [0d7ebfcf0f52 636206c34e2e]
	I0805 10:42:09.214829    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:42:09.227231    9068 logs.go:276] 1 containers: [3fc287268b86]
	I0805 10:42:09.227296    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:42:09.238125    9068 logs.go:276] 2 containers: [ea53529dcbfd 91f7a199884a]
	I0805 10:42:09.238190    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:42:09.248795    9068 logs.go:276] 1 containers: [a29b2ad2a3db]
	I0805 10:42:09.248856    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:42:09.259897    9068 logs.go:276] 2 containers: [a2439666ad31 e1cc9e5e2f59]
	I0805 10:42:09.259967    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:42:09.271044    9068 logs.go:276] 0 containers: []
	W0805 10:42:09.271056    9068 logs.go:278] No container was found matching "kindnet"
	I0805 10:42:09.271108    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:42:09.281905    9068 logs.go:276] 2 containers: [e66ab34a3554 78e756f0378c]
	I0805 10:42:09.281928    9068 logs.go:123] Gathering logs for kube-controller-manager [a2439666ad31] ...
	I0805 10:42:09.281934    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2439666ad31"
	I0805 10:42:09.303172    9068 logs.go:123] Gathering logs for storage-provisioner [78e756f0378c] ...
	I0805 10:42:09.303183    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78e756f0378c"
	I0805 10:42:09.314408    9068 logs.go:123] Gathering logs for kubelet ...
	I0805 10:42:09.314423    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:42:09.352880    9068 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:42:09.352888    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:42:09.390133    9068 logs.go:123] Gathering logs for kube-apiserver [3c41f12d029f] ...
	I0805 10:42:09.390148    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c41f12d029f"
	I0805 10:42:09.428286    9068 logs.go:123] Gathering logs for etcd [0d7ebfcf0f52] ...
	I0805 10:42:09.428297    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d7ebfcf0f52"
	I0805 10:42:09.448775    9068 logs.go:123] Gathering logs for coredns [3fc287268b86] ...
	I0805 10:42:09.448790    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fc287268b86"
	I0805 10:42:09.460827    9068 logs.go:123] Gathering logs for kube-scheduler [ea53529dcbfd] ...
	I0805 10:42:09.460838    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea53529dcbfd"
	I0805 10:42:09.472767    9068 logs.go:123] Gathering logs for Docker ...
	I0805 10:42:09.472777    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:42:09.495851    9068 logs.go:123] Gathering logs for dmesg ...
	I0805 10:42:09.495860    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:42:09.500003    9068 logs.go:123] Gathering logs for kube-apiserver [399a26756750] ...
	I0805 10:42:09.500010    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 399a26756750"
	I0805 10:42:09.517044    9068 logs.go:123] Gathering logs for storage-provisioner [e66ab34a3554] ...
	I0805 10:42:09.517054    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e66ab34a3554"
	I0805 10:42:09.528358    9068 logs.go:123] Gathering logs for etcd [636206c34e2e] ...
	I0805 10:42:09.528371    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 636206c34e2e"
	I0805 10:42:09.543900    9068 logs.go:123] Gathering logs for kube-scheduler [91f7a199884a] ...
	I0805 10:42:09.543910    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91f7a199884a"
	I0805 10:42:09.556977    9068 logs.go:123] Gathering logs for kube-proxy [a29b2ad2a3db] ...
	I0805 10:42:09.556993    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a29b2ad2a3db"
	I0805 10:42:09.568509    9068 logs.go:123] Gathering logs for kube-controller-manager [e1cc9e5e2f59] ...
	I0805 10:42:09.568525    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1cc9e5e2f59"
	I0805 10:42:09.584145    9068 logs.go:123] Gathering logs for container status ...
	I0805 10:42:09.584156    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
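	The "container status" collector above is a defensive fallback chain: it prefers crictl when it is installed and otherwise falls back to the Docker CLI. A minimal standalone sketch of the same pattern (same commands as the log; quoting added for clarity):

    #!/usr/bin/env bash
    # Prefer crictl if present; `which crictl || echo crictl` keeps the command
    # string non-empty, so when crictl is missing the first branch fails with
    # "command not found" and the `||` fallback runs `docker ps -a` instead.
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a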
	I0805 10:42:12.098283    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:42:17.101073    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:42:17.101296    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:42:17.132219    9068 logs.go:276] 2 containers: [399a26756750 3c41f12d029f]
	I0805 10:42:17.132340    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:42:17.151936    9068 logs.go:276] 2 containers: [0d7ebfcf0f52 636206c34e2e]
	I0805 10:42:17.152020    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:42:17.170562    9068 logs.go:276] 1 containers: [3fc287268b86]
	I0805 10:42:17.170633    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:42:17.185915    9068 logs.go:276] 2 containers: [ea53529dcbfd 91f7a199884a]
	I0805 10:42:17.185975    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:42:17.195713    9068 logs.go:276] 1 containers: [a29b2ad2a3db]
	I0805 10:42:17.195777    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:42:17.205885    9068 logs.go:276] 2 containers: [a2439666ad31 e1cc9e5e2f59]
	I0805 10:42:17.205958    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:42:17.215753    9068 logs.go:276] 0 containers: []
	W0805 10:42:17.215764    9068 logs.go:278] No container was found matching "kindnet"
	I0805 10:42:17.215818    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:42:17.226633    9068 logs.go:276] 2 containers: [e66ab34a3554 78e756f0378c]
	I0805 10:42:17.226649    9068 logs.go:123] Gathering logs for etcd [636206c34e2e] ...
	I0805 10:42:17.226654    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 636206c34e2e"
	I0805 10:42:17.241383    9068 logs.go:123] Gathering logs for kube-controller-manager [a2439666ad31] ...
	I0805 10:42:17.241398    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2439666ad31"
	I0805 10:42:17.258508    9068 logs.go:123] Gathering logs for storage-provisioner [78e756f0378c] ...
	I0805 10:42:17.258517    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78e756f0378c"
	I0805 10:42:17.269571    9068 logs.go:123] Gathering logs for dmesg ...
	I0805 10:42:17.269587    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:42:17.273444    9068 logs.go:123] Gathering logs for kube-apiserver [3c41f12d029f] ...
	I0805 10:42:17.273451    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c41f12d029f"
	I0805 10:42:17.311008    9068 logs.go:123] Gathering logs for storage-provisioner [e66ab34a3554] ...
	I0805 10:42:17.311018    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e66ab34a3554"
	I0805 10:42:17.322007    9068 logs.go:123] Gathering logs for kubelet ...
	I0805 10:42:17.322021    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:42:17.360243    9068 logs.go:123] Gathering logs for kube-apiserver [399a26756750] ...
	I0805 10:42:17.360252    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 399a26756750"
	I0805 10:42:17.373466    9068 logs.go:123] Gathering logs for etcd [0d7ebfcf0f52] ...
	I0805 10:42:17.373476    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d7ebfcf0f52"
	I0805 10:42:17.387706    9068 logs.go:123] Gathering logs for kube-scheduler [ea53529dcbfd] ...
	I0805 10:42:17.387721    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea53529dcbfd"
	I0805 10:42:17.403478    9068 logs.go:123] Gathering logs for kube-scheduler [91f7a199884a] ...
	I0805 10:42:17.403492    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91f7a199884a"
	I0805 10:42:17.431676    9068 logs.go:123] Gathering logs for kube-controller-manager [e1cc9e5e2f59] ...
	I0805 10:42:17.431692    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1cc9e5e2f59"
	I0805 10:42:17.453527    9068 logs.go:123] Gathering logs for container status ...
	I0805 10:42:17.453544    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:42:17.466521    9068 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:42:17.466534    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:42:17.502355    9068 logs.go:123] Gathering logs for coredns [3fc287268b86] ...
	I0805 10:42:17.502370    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fc287268b86"
	I0805 10:42:17.513669    9068 logs.go:123] Gathering logs for kube-proxy [a29b2ad2a3db] ...
	I0805 10:42:17.513681    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a29b2ad2a3db"
	I0805 10:42:17.528158    9068 logs.go:123] Gathering logs for Docker ...
	I0805 10:42:17.528170    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:42:20.051796    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:42:25.054148    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:42:25.054276    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:42:25.069170    9068 logs.go:276] 2 containers: [399a26756750 3c41f12d029f]
	I0805 10:42:25.069256    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:42:25.081747    9068 logs.go:276] 2 containers: [0d7ebfcf0f52 636206c34e2e]
	I0805 10:42:25.081820    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:42:25.092182    9068 logs.go:276] 1 containers: [3fc287268b86]
	I0805 10:42:25.092247    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:42:25.102710    9068 logs.go:276] 2 containers: [ea53529dcbfd 91f7a199884a]
	I0805 10:42:25.102780    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:42:25.113347    9068 logs.go:276] 1 containers: [a29b2ad2a3db]
	I0805 10:42:25.113414    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:42:25.123509    9068 logs.go:276] 2 containers: [a2439666ad31 e1cc9e5e2f59]
	I0805 10:42:25.123579    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:42:25.133340    9068 logs.go:276] 0 containers: []
	W0805 10:42:25.133351    9068 logs.go:278] No container was found matching "kindnet"
	I0805 10:42:25.133400    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:42:25.148514    9068 logs.go:276] 2 containers: [e66ab34a3554 78e756f0378c]
	I0805 10:42:25.148532    9068 logs.go:123] Gathering logs for kube-controller-manager [a2439666ad31] ...
	I0805 10:42:25.148540    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2439666ad31"
	I0805 10:42:25.168474    9068 logs.go:123] Gathering logs for kube-controller-manager [e1cc9e5e2f59] ...
	I0805 10:42:25.168485    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1cc9e5e2f59"
	I0805 10:42:25.182467    9068 logs.go:123] Gathering logs for dmesg ...
	I0805 10:42:25.182478    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:42:25.187226    9068 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:42:25.187236    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:42:25.223348    9068 logs.go:123] Gathering logs for etcd [636206c34e2e] ...
	I0805 10:42:25.223361    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 636206c34e2e"
	I0805 10:42:25.238385    9068 logs.go:123] Gathering logs for kube-apiserver [399a26756750] ...
	I0805 10:42:25.238396    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 399a26756750"
	I0805 10:42:25.251677    9068 logs.go:123] Gathering logs for storage-provisioner [e66ab34a3554] ...
	I0805 10:42:25.251686    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e66ab34a3554"
	I0805 10:42:25.263633    9068 logs.go:123] Gathering logs for container status ...
	I0805 10:42:25.263646    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:42:25.276220    9068 logs.go:123] Gathering logs for etcd [0d7ebfcf0f52] ...
	I0805 10:42:25.276230    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d7ebfcf0f52"
	I0805 10:42:25.290204    9068 logs.go:123] Gathering logs for storage-provisioner [78e756f0378c] ...
	I0805 10:42:25.290214    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78e756f0378c"
	I0805 10:42:25.301990    9068 logs.go:123] Gathering logs for Docker ...
	I0805 10:42:25.302005    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:42:25.323299    9068 logs.go:123] Gathering logs for kube-scheduler [ea53529dcbfd] ...
	I0805 10:42:25.323307    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea53529dcbfd"
	I0805 10:42:25.335352    9068 logs.go:123] Gathering logs for kube-scheduler [91f7a199884a] ...
	I0805 10:42:25.335362    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91f7a199884a"
	I0805 10:42:25.347290    9068 logs.go:123] Gathering logs for kube-proxy [a29b2ad2a3db] ...
	I0805 10:42:25.347300    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a29b2ad2a3db"
	I0805 10:42:25.359501    9068 logs.go:123] Gathering logs for kubelet ...
	I0805 10:42:25.359512    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:42:25.396202    9068 logs.go:123] Gathering logs for kube-apiserver [3c41f12d029f] ...
	I0805 10:42:25.396210    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c41f12d029f"
	I0805 10:42:25.434367    9068 logs.go:123] Gathering logs for coredns [3fc287268b86] ...
	I0805 10:42:25.434378    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fc287268b86"
	I0805 10:42:27.947796    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:42:32.950264    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:42:32.950396    9068 kubeadm.go:597] duration metric: took 4m4.011681625s to restartPrimaryControlPlane
	W0805 10:42:32.950538    9068 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
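	This is the decision point of the run: every healthz probe since the restart began has ended in a 5-second client timeout (visible in the paired timestamps, e.g. 10:42:12 -> 10:42:17), so after roughly four minutes minikube abandons the in-place restart and falls back to `kubeadm reset` + `kubeadm init`. The actual polling lives in Go (api_server.go); the following is only an illustrative bash equivalent of the probe loop, with the address and port taken from the log:

    #!/usr/bin/env bash
    # Illustrative only: poll the apiserver healthz endpoint with a 5-second
    # per-request timeout, as the log does. -k skips TLS verification for brevity.
    until curl -sk --max-time 5 https://10.0.2.15:8443/healthz | grep -q ok; do
      echo "apiserver not healthy yet; retrying..."
      sleep 3
    done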
	I0805 10:42:32.950600    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0805 10:42:34.019848    9068 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.069248334s)
	I0805 10:42:34.019925    9068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 10:42:34.024918    9068 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 10:42:34.027655    9068 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 10:42:34.030554    9068 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 10:42:34.030559    9068 kubeadm.go:157] found existing configuration files:
	
	I0805 10:42:34.030578    9068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51187 /etc/kubernetes/admin.conf
	I0805 10:42:34.033339    9068 kubeadm.go:163] "https://control-plane.minikube.internal:51187" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51187 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 10:42:34.033359    9068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 10:42:34.036034    9068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51187 /etc/kubernetes/kubelet.conf
	I0805 10:42:34.038691    9068 kubeadm.go:163] "https://control-plane.minikube.internal:51187" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51187 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 10:42:34.038715    9068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 10:42:34.042094    9068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51187 /etc/kubernetes/controller-manager.conf
	I0805 10:42:34.044763    9068 kubeadm.go:163] "https://control-plane.minikube.internal:51187" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51187 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 10:42:34.044787    9068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 10:42:34.047199    9068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51187 /etc/kubernetes/scheduler.conf
	I0805 10:42:34.050277    9068 kubeadm.go:163] "https://control-plane.minikube.internal:51187" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51187 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 10:42:34.050300    9068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
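	The four grep/rm pairs above implement a stale-kubeconfig sweep: each kubeconfig is kept only if it already points at the expected control-plane endpoint; a failed check (file missing or wrong endpoint, status 2 here) triggers removal so that `kubeadm init` regenerates it. A compact sketch of the same logic, with the endpoint copied from the log:

    #!/usr/bin/env bash
    # Remove any kubeconfig that does not reference the expected endpoint, so
    # the subsequent `kubeadm init` writes fresh ones.
    endpoint="https://control-plane.minikube.internal:51187"
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "$endpoint" "/etc/kubernetes/$f.conf" \
        || sudo rm -f "/etc/kubernetes/$f.conf"
    done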
	I0805 10:42:34.053059    9068 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0805 10:42:34.070160    9068 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0805 10:42:34.070188    9068 kubeadm.go:310] [preflight] Running pre-flight checks
	I0805 10:42:34.125465    9068 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0805 10:42:34.125546    9068 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0805 10:42:34.125597    9068 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'

	I0805 10:42:34.174357    9068 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0805 10:42:34.177622    9068 out.go:204]   - Generating certificates and keys ...
	I0805 10:42:34.177654    9068 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0805 10:42:34.177685    9068 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0805 10:42:34.177794    9068 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0805 10:42:34.177860    9068 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0805 10:42:34.178023    9068 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0805 10:42:34.178105    9068 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0805 10:42:34.178153    9068 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0805 10:42:34.178230    9068 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0805 10:42:34.178308    9068 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0805 10:42:34.178350    9068 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0805 10:42:34.178440    9068 kubeadm.go:310] [certs] Using the existing "sa" key
	I0805 10:42:34.178513    9068 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0805 10:42:34.275839    9068 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0805 10:42:34.388908    9068 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0805 10:42:34.603280    9068 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0805 10:42:34.650264    9068 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0805 10:42:34.683183    9068 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0805 10:42:34.684663    9068 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0805 10:42:34.684692    9068 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0805 10:42:34.767116    9068 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0805 10:42:34.773292    9068 out.go:204]   - Booting up control plane ...
	I0805 10:42:34.773345    9068 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0805 10:42:34.773391    9068 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0805 10:42:34.773429    9068 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0805 10:42:34.773474    9068 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0805 10:42:34.773568    9068 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0805 10:42:39.771440    9068 kubeadm.go:310] [apiclient] All control plane components are healthy after 5.001646 seconds
	I0805 10:42:39.771553    9068 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0805 10:42:39.777699    9068 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0805 10:42:40.296745    9068 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0805 10:42:40.296998    9068 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-363000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0805 10:42:40.803378    9068 kubeadm.go:310] [bootstrap-token] Using token: y4030r.pqrajb9g358l1ucz
	I0805 10:42:40.806226    9068 out.go:204]   - Configuring RBAC rules ...
	I0805 10:42:40.806319    9068 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0805 10:42:40.808688    9068 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0805 10:42:40.811693    9068 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0805 10:42:40.812840    9068 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I0805 10:42:40.813930    9068 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0805 10:42:40.815147    9068 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0805 10:42:40.819339    9068 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0805 10:42:41.007932    9068 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0805 10:42:41.210768    9068 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0805 10:42:41.211286    9068 kubeadm.go:310] 
	I0805 10:42:41.211320    9068 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0805 10:42:41.211324    9068 kubeadm.go:310] 
	I0805 10:42:41.211376    9068 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0805 10:42:41.211382    9068 kubeadm.go:310] 
	I0805 10:42:41.211413    9068 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0805 10:42:41.211447    9068 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0805 10:42:41.211472    9068 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0805 10:42:41.211474    9068 kubeadm.go:310] 
	I0805 10:42:41.211512    9068 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0805 10:42:41.211516    9068 kubeadm.go:310] 
	I0805 10:42:41.211545    9068 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0805 10:42:41.211548    9068 kubeadm.go:310] 
	I0805 10:42:41.211580    9068 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0805 10:42:41.211678    9068 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0805 10:42:41.211740    9068 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0805 10:42:41.211747    9068 kubeadm.go:310] 
	I0805 10:42:41.211809    9068 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0805 10:42:41.211860    9068 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0805 10:42:41.211863    9068 kubeadm.go:310] 
	I0805 10:42:41.211927    9068 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token y4030r.pqrajb9g358l1ucz \
	I0805 10:42:41.212018    9068 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:11215ef01abcfb912d109f6d89af227ccae4ec1efb0dbe7ad4cd9a56e17c4c25 \
	I0805 10:42:41.212034    9068 kubeadm.go:310] 	--control-plane 
	I0805 10:42:41.212039    9068 kubeadm.go:310] 
	I0805 10:42:41.212080    9068 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0805 10:42:41.212083    9068 kubeadm.go:310] 
	I0805 10:42:41.212129    9068 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token y4030r.pqrajb9g358l1ucz \
	I0805 10:42:41.212205    9068 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:11215ef01abcfb912d109f6d89af227ccae4ec1efb0dbe7ad4cd9a56e17c4c25 
	I0805 10:42:41.212258    9068 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0805 10:42:41.212263    9068 cni.go:84] Creating CNI manager for ""
	I0805 10:42:41.212272    9068 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 10:42:41.216445    9068 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0805 10:42:41.221444    9068 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0805 10:42:41.224474    9068 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
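	Here minikube pushes a 496-byte bridge CNI conflist from memory; the payload itself is not shown in the log. For reference, a typical two-plugin bridge conflist looks like the sketch below — the subnet, bridge name, and exact fields are assumptions, not the actual bytes minikube wrote:

    #!/usr/bin/env bash
    # Illustrative bridge CNI config (bridge + portmap, host-local IPAM).
    # Subnet and bridge name are assumed values, not taken from the log.
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF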
	I0805 10:42:41.231836    9068 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0805 10:42:41.231902    9068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 10:42:41.231941    9068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-363000 minikube.k8s.io/updated_at=2024_08_05T10_42_41_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=7ab1b4d76a5d87b75cd4b70be3ee81f93304b0ab minikube.k8s.io/name=stopped-upgrade-363000 minikube.k8s.io/primary=true
	I0805 10:42:41.276995    9068 kubeadm.go:1113] duration metric: took 45.154667ms to wait for elevateKubeSystemPrivileges
	I0805 10:42:41.277009    9068 ops.go:34] apiserver oom_adj: -16
	I0805 10:42:41.277144    9068 kubeadm.go:394] duration metric: took 4m12.352007166s to StartCluster
	I0805 10:42:41.277156    9068 settings.go:142] acquiring lock: {Name:mk1ff1cf525c2989e8f58a78ff9196d0a088a47b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 10:42:41.277318    9068 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19374-6507/kubeconfig
	I0805 10:42:41.277712    9068 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19374-6507/kubeconfig: {Name:mkf52f0a49b2ae63f3d2905c5633513b3086a0af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 10:42:41.277898    9068 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 10:42:41.277941    9068 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0805 10:42:41.277981    9068 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-363000"
	I0805 10:42:41.277993    9068 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-363000"
	I0805 10:42:41.278003    9068 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-363000"
	I0805 10:42:41.277993    9068 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-363000"
	W0805 10:42:41.278035    9068 addons.go:243] addon storage-provisioner should already be in state true
	I0805 10:42:41.278048    9068 host.go:66] Checking if "stopped-upgrade-363000" exists ...
	I0805 10:42:41.278066    9068 config.go:182] Loaded profile config "stopped-upgrade-363000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0805 10:42:41.282216    9068 out.go:177] * Verifying Kubernetes components...
	I0805 10:42:41.282850    9068 kapi.go:59] client config for stopped-upgrade-363000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/stopped-upgrade-363000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/stopped-upgrade-363000/client.key", CAFile:"/Users/jenkins/minikube-integration/19374-6507/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1019d02e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0805 10:42:41.286694    9068 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-363000"
	W0805 10:42:41.286700    9068 addons.go:243] addon default-storageclass should already be in state true
	I0805 10:42:41.286708    9068 host.go:66] Checking if "stopped-upgrade-363000" exists ...
	I0805 10:42:41.287226    9068 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0805 10:42:41.287231    9068 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0805 10:42:41.287236    9068 sshutil.go:53] new ssh client: &{IP:localhost Port:51155 SSHKeyPath:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/stopped-upgrade-363000/id_rsa Username:docker}
	I0805 10:42:41.292411    9068 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 10:42:41.298462    9068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 10:42:41.304387    9068 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 10:42:41.304394    9068 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0805 10:42:41.304401    9068 sshutil.go:53] new ssh client: &{IP:localhost Port:51155 SSHKeyPath:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/stopped-upgrade-363000/id_rsa Username:docker}
	I0805 10:42:41.372664    9068 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 10:42:41.377345    9068 api_server.go:52] waiting for apiserver process to appear ...
	I0805 10:42:41.377380    9068 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 10:42:41.381138    9068 api_server.go:72] duration metric: took 103.231458ms to wait for apiserver process to appear ...
	I0805 10:42:41.381147    9068 api_server.go:88] waiting for apiserver healthz status ...
	I0805 10:42:41.381154    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:42:41.388452    9068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0805 10:42:41.419441    9068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 10:42:46.383179    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:42:46.383202    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:42:51.383375    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:42:51.383398    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:42:56.383616    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:42:56.383644    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:43:01.383949    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:43:01.383990    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:43:06.384460    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:43:06.384491    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:43:11.385091    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:43:11.385133    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0805 10:43:11.711287    9068 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0805 10:43:11.716193    9068 out.go:177] * Enabled addons: storage-provisioner
	I0805 10:43:11.724204    9068 addons.go:510] duration metric: took 30.446681834s for enable addons: enabled=[storage-provisioner]
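	Note the asymmetry above: 'default-storageclass' failed because it needs a live API round-trip (listing StorageClasses timed out against 10.0.2.15:8443), while the storage-provisioner manifest was simply applied over SSH and recorded as enabled. The same round-trip can be reproduced from the node with the paths shown earlier in the log; this is an illustrative check, not a command the test itself ran:

    #!/usr/bin/env bash
    # Reproduce the failing API call: listing StorageClasses requires a
    # reachable apiserver, so this would hit the same i/o timeout here.
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.24.1/kubectl get storageclasses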
	I0805 10:43:16.385949    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:43:16.385996    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:43:21.387122    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:43:21.387165    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:43:26.388484    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:43:26.388533    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:43:31.390182    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:43:31.390203    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:43:36.392268    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:43:36.392308    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:43:41.394568    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:43:41.394736    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:43:41.405296    9068 logs.go:276] 1 containers: [ae2e3b5c46bc]
	I0805 10:43:41.405369    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:43:41.415758    9068 logs.go:276] 1 containers: [207bbc832181]
	I0805 10:43:41.415836    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:43:41.426216    9068 logs.go:276] 2 containers: [e2daae6ade13 1a7c8223b623]
	I0805 10:43:41.426281    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:43:41.436723    9068 logs.go:276] 1 containers: [0db98522e2bf]
	I0805 10:43:41.436793    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:43:41.447931    9068 logs.go:276] 1 containers: [acf7c60c32f0]
	I0805 10:43:41.448003    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:43:41.458351    9068 logs.go:276] 1 containers: [6e0936bd3eb7]
	I0805 10:43:41.458423    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:43:41.467993    9068 logs.go:276] 0 containers: []
	W0805 10:43:41.468004    9068 logs.go:278] No container was found matching "kindnet"
	I0805 10:43:41.468059    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:43:41.478551    9068 logs.go:276] 1 containers: [385293316d74]
	I0805 10:43:41.478566    9068 logs.go:123] Gathering logs for kubelet ...
	I0805 10:43:41.478572    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:43:41.512534    9068 logs.go:123] Gathering logs for kube-apiserver [ae2e3b5c46bc] ...
	I0805 10:43:41.512558    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2e3b5c46bc"
	I0805 10:43:41.528344    9068 logs.go:123] Gathering logs for etcd [207bbc832181] ...
	I0805 10:43:41.528360    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207bbc832181"
	I0805 10:43:41.544979    9068 logs.go:123] Gathering logs for container status ...
	I0805 10:43:41.544990    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:43:41.556525    9068 logs.go:123] Gathering logs for dmesg ...
	I0805 10:43:41.556539    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:43:41.561259    9068 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:43:41.561268    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:43:41.601635    9068 logs.go:123] Gathering logs for coredns [e2daae6ade13] ...
	I0805 10:43:41.601646    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2daae6ade13"
	I0805 10:43:41.613410    9068 logs.go:123] Gathering logs for coredns [1a7c8223b623] ...
	I0805 10:43:41.613423    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a7c8223b623"
	I0805 10:43:41.624986    9068 logs.go:123] Gathering logs for kube-scheduler [0db98522e2bf] ...
	I0805 10:43:41.624996    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0db98522e2bf"
	I0805 10:43:41.639655    9068 logs.go:123] Gathering logs for kube-proxy [acf7c60c32f0] ...
	I0805 10:43:41.639665    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf7c60c32f0"
	I0805 10:43:41.651293    9068 logs.go:123] Gathering logs for kube-controller-manager [6e0936bd3eb7] ...
	I0805 10:43:41.651304    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e0936bd3eb7"
	I0805 10:43:41.668177    9068 logs.go:123] Gathering logs for storage-provisioner [385293316d74] ...
	I0805 10:43:41.668187    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385293316d74"
	I0805 10:43:41.679330    9068 logs.go:123] Gathering logs for Docker ...
	I0805 10:43:41.679340    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:43:44.206058    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:43:49.208462    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:43:49.208854    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:43:49.241514    9068 logs.go:276] 1 containers: [ae2e3b5c46bc]
	I0805 10:43:49.241661    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:43:49.258087    9068 logs.go:276] 1 containers: [207bbc832181]
	I0805 10:43:49.258177    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:43:49.273661    9068 logs.go:276] 2 containers: [e2daae6ade13 1a7c8223b623]
	I0805 10:43:49.273733    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:43:49.287903    9068 logs.go:276] 1 containers: [0db98522e2bf]
	I0805 10:43:49.287971    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:43:49.298413    9068 logs.go:276] 1 containers: [acf7c60c32f0]
	I0805 10:43:49.298489    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:43:49.309809    9068 logs.go:276] 1 containers: [6e0936bd3eb7]
	I0805 10:43:49.309881    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:43:49.320564    9068 logs.go:276] 0 containers: []
	W0805 10:43:49.320576    9068 logs.go:278] No container was found matching "kindnet"
	I0805 10:43:49.320633    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:43:49.342645    9068 logs.go:276] 1 containers: [385293316d74]
	I0805 10:43:49.342662    9068 logs.go:123] Gathering logs for kube-scheduler [0db98522e2bf] ...
	I0805 10:43:49.342668    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0db98522e2bf"
	I0805 10:43:49.357243    9068 logs.go:123] Gathering logs for storage-provisioner [385293316d74] ...
	I0805 10:43:49.357253    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385293316d74"
	I0805 10:43:49.368913    9068 logs.go:123] Gathering logs for Docker ...
	I0805 10:43:49.368923    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:43:49.394121    9068 logs.go:123] Gathering logs for container status ...
	I0805 10:43:49.394133    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:43:49.405483    9068 logs.go:123] Gathering logs for kubelet ...
	I0805 10:43:49.405494    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:43:49.440808    9068 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:43:49.440820    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:43:49.478715    9068 logs.go:123] Gathering logs for kube-apiserver [ae2e3b5c46bc] ...
	I0805 10:43:49.478727    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2e3b5c46bc"
	I0805 10:43:49.494556    9068 logs.go:123] Gathering logs for coredns [1a7c8223b623] ...
	I0805 10:43:49.494568    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a7c8223b623"
	I0805 10:43:49.509731    9068 logs.go:123] Gathering logs for kube-proxy [acf7c60c32f0] ...
	I0805 10:43:49.509741    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf7c60c32f0"
	I0805 10:43:49.521585    9068 logs.go:123] Gathering logs for kube-controller-manager [6e0936bd3eb7] ...
	I0805 10:43:49.521597    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e0936bd3eb7"
	I0805 10:43:49.539165    9068 logs.go:123] Gathering logs for dmesg ...
	I0805 10:43:49.539180    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:43:49.543660    9068 logs.go:123] Gathering logs for etcd [207bbc832181] ...
	I0805 10:43:49.543666    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207bbc832181"
	I0805 10:43:49.557920    9068 logs.go:123] Gathering logs for coredns [e2daae6ade13] ...
	I0805 10:43:49.557931    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2daae6ade13"
	I0805 10:43:52.071731    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:43:57.074145    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:43:57.074457    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:43:57.101010    9068 logs.go:276] 1 containers: [ae2e3b5c46bc]
	I0805 10:43:57.101132    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:43:57.118584    9068 logs.go:276] 1 containers: [207bbc832181]
	I0805 10:43:57.118668    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:43:57.132274    9068 logs.go:276] 2 containers: [e2daae6ade13 1a7c8223b623]
	I0805 10:43:57.132337    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:43:57.147396    9068 logs.go:276] 1 containers: [0db98522e2bf]
	I0805 10:43:57.147460    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:43:57.157938    9068 logs.go:276] 1 containers: [acf7c60c32f0]
	I0805 10:43:57.158007    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:43:57.168348    9068 logs.go:276] 1 containers: [6e0936bd3eb7]
	I0805 10:43:57.168409    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:43:57.178270    9068 logs.go:276] 0 containers: []
	W0805 10:43:57.178283    9068 logs.go:278] No container was found matching "kindnet"
	I0805 10:43:57.178340    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:43:57.188831    9068 logs.go:276] 1 containers: [385293316d74]
	I0805 10:43:57.188847    9068 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:43:57.188854    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:43:57.223960    9068 logs.go:123] Gathering logs for etcd [207bbc832181] ...
	I0805 10:43:57.223974    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207bbc832181"
	I0805 10:43:57.237712    9068 logs.go:123] Gathering logs for coredns [e2daae6ade13] ...
	I0805 10:43:57.237726    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2daae6ade13"
	I0805 10:43:57.249824    9068 logs.go:123] Gathering logs for kube-scheduler [0db98522e2bf] ...
	I0805 10:43:57.249840    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0db98522e2bf"
	I0805 10:43:57.264289    9068 logs.go:123] Gathering logs for kube-proxy [acf7c60c32f0] ...
	I0805 10:43:57.264299    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf7c60c32f0"
	I0805 10:43:57.275872    9068 logs.go:123] Gathering logs for storage-provisioner [385293316d74] ...
	I0805 10:43:57.275886    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385293316d74"
	I0805 10:43:57.295393    9068 logs.go:123] Gathering logs for dmesg ...
	I0805 10:43:57.295409    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:43:57.302204    9068 logs.go:123] Gathering logs for kube-apiserver [ae2e3b5c46bc] ...
	I0805 10:43:57.302213    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2e3b5c46bc"
	I0805 10:43:57.316916    9068 logs.go:123] Gathering logs for coredns [1a7c8223b623] ...
	I0805 10:43:57.316931    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a7c8223b623"
	I0805 10:43:57.329146    9068 logs.go:123] Gathering logs for kube-controller-manager [6e0936bd3eb7] ...
	I0805 10:43:57.329157    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e0936bd3eb7"
	I0805 10:43:57.346788    9068 logs.go:123] Gathering logs for Docker ...
	I0805 10:43:57.346802    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:43:57.371581    9068 logs.go:123] Gathering logs for container status ...
	I0805 10:43:57.371589    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:43:57.382899    9068 logs.go:123] Gathering logs for kubelet ...
	I0805 10:43:57.382914    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:43:59.919605    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:44:04.921936    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:44:04.922355    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:44:04.959004    9068 logs.go:276] 1 containers: [ae2e3b5c46bc]
	I0805 10:44:04.959139    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:44:04.980927    9068 logs.go:276] 1 containers: [207bbc832181]
	I0805 10:44:04.981026    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:44:04.996835    9068 logs.go:276] 2 containers: [e2daae6ade13 1a7c8223b623]
	I0805 10:44:04.996903    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:44:05.009255    9068 logs.go:276] 1 containers: [0db98522e2bf]
	I0805 10:44:05.009325    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:44:05.020264    9068 logs.go:276] 1 containers: [acf7c60c32f0]
	I0805 10:44:05.020331    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:44:05.038397    9068 logs.go:276] 1 containers: [6e0936bd3eb7]
	I0805 10:44:05.038462    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:44:05.048623    9068 logs.go:276] 0 containers: []
	W0805 10:44:05.048633    9068 logs.go:278] No container was found matching "kindnet"
	I0805 10:44:05.048685    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:44:05.059403    9068 logs.go:276] 1 containers: [385293316d74]
	I0805 10:44:05.059421    9068 logs.go:123] Gathering logs for kube-scheduler [0db98522e2bf] ...
	I0805 10:44:05.059426    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0db98522e2bf"
	I0805 10:44:05.077253    9068 logs.go:123] Gathering logs for container status ...
	I0805 10:44:05.077265    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:44:05.090163    9068 logs.go:123] Gathering logs for kubelet ...
	I0805 10:44:05.090175    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:44:05.125691    9068 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:44:05.125700    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:44:05.159808    9068 logs.go:123] Gathering logs for coredns [1a7c8223b623] ...
	I0805 10:44:05.159822    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a7c8223b623"
	I0805 10:44:05.171783    9068 logs.go:123] Gathering logs for coredns [e2daae6ade13] ...
	I0805 10:44:05.171796    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2daae6ade13"
	I0805 10:44:05.183527    9068 logs.go:123] Gathering logs for kube-proxy [acf7c60c32f0] ...
	I0805 10:44:05.183538    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf7c60c32f0"
	I0805 10:44:05.194930    9068 logs.go:123] Gathering logs for kube-controller-manager [6e0936bd3eb7] ...
	I0805 10:44:05.194943    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e0936bd3eb7"
	I0805 10:44:05.212495    9068 logs.go:123] Gathering logs for storage-provisioner [385293316d74] ...
	I0805 10:44:05.212505    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385293316d74"
	I0805 10:44:05.224687    9068 logs.go:123] Gathering logs for Docker ...
	I0805 10:44:05.224702    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:44:05.249513    9068 logs.go:123] Gathering logs for dmesg ...
	I0805 10:44:05.249521    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:44:05.253832    9068 logs.go:123] Gathering logs for kube-apiserver [ae2e3b5c46bc] ...
	I0805 10:44:05.253841    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2e3b5c46bc"
	I0805 10:44:05.268358    9068 logs.go:123] Gathering logs for etcd [207bbc832181] ...
	I0805 10:44:05.268372    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207bbc832181"
	I0805 10:44:07.784530    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:44:12.786844    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:44:12.787193    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:44:12.825574    9068 logs.go:276] 1 containers: [ae2e3b5c46bc]
	I0805 10:44:12.825698    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:44:12.844854    9068 logs.go:276] 1 containers: [207bbc832181]
	I0805 10:44:12.844953    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:44:12.859548    9068 logs.go:276] 2 containers: [e2daae6ade13 1a7c8223b623]
	I0805 10:44:12.859625    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:44:12.871632    9068 logs.go:276] 1 containers: [0db98522e2bf]
	I0805 10:44:12.871700    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:44:12.887566    9068 logs.go:276] 1 containers: [acf7c60c32f0]
	I0805 10:44:12.887637    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:44:12.902524    9068 logs.go:276] 1 containers: [6e0936bd3eb7]
	I0805 10:44:12.902595    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:44:12.912751    9068 logs.go:276] 0 containers: []
	W0805 10:44:12.912769    9068 logs.go:278] No container was found matching "kindnet"
	I0805 10:44:12.912825    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:44:12.924158    9068 logs.go:276] 1 containers: [385293316d74]
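Each enumeration block above resolves one control-plane component to its container IDs with a Docker name filter: cri-dockerd keeps the dockershim-style naming convention k8s_<container>_<pod>_<namespace>_<uid>_<attempt>, so filtering on name=k8s_etcd matches the etcd container whatever pod suffix it carries, and -a keeps exited containers in the match (which is why the "No container was found matching kindnet" warning fires only for a component that was never deployed). A sketch under those assumptions follows; it needs a local docker CLI, and containerIDs is an illustrative helper, not a minikube function.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs runs the same docker ps invocation as the log lines above
// and returns one container ID per output line (including exited
// containers, because of -a).
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kindnet"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, ":", err)
			continue
		}
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	}
}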
	I0805 10:44:12.924179    9068 logs.go:123] Gathering logs for kube-controller-manager [6e0936bd3eb7] ...
	I0805 10:44:12.924184    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e0936bd3eb7"
	I0805 10:44:12.941761    9068 logs.go:123] Gathering logs for Docker ...
	I0805 10:44:12.941773    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:44:12.965443    9068 logs.go:123] Gathering logs for kubelet ...
	I0805 10:44:12.965457    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:44:13.000662    9068 logs.go:123] Gathering logs for dmesg ...
	I0805 10:44:13.000679    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:44:13.005523    9068 logs.go:123] Gathering logs for etcd [207bbc832181] ...
	I0805 10:44:13.005531    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207bbc832181"
	I0805 10:44:13.019242    9068 logs.go:123] Gathering logs for coredns [e2daae6ade13] ...
	I0805 10:44:13.019254    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2daae6ade13"
	I0805 10:44:13.031061    9068 logs.go:123] Gathering logs for coredns [1a7c8223b623] ...
	I0805 10:44:13.031075    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a7c8223b623"
	I0805 10:44:13.042730    9068 logs.go:123] Gathering logs for kube-proxy [acf7c60c32f0] ...
	I0805 10:44:13.042746    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf7c60c32f0"
	I0805 10:44:13.054909    9068 logs.go:123] Gathering logs for container status ...
	I0805 10:44:13.054921    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:44:13.066083    9068 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:44:13.066095    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:44:13.103117    9068 logs.go:123] Gathering logs for kube-apiserver [ae2e3b5c46bc] ...
	I0805 10:44:13.103128    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2e3b5c46bc"
	I0805 10:44:13.119395    9068 logs.go:123] Gathering logs for kube-scheduler [0db98522e2bf] ...
	I0805 10:44:13.119407    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0db98522e2bf"
	I0805 10:44:13.134628    9068 logs.go:123] Gathering logs for storage-provisioner [385293316d74] ...
	I0805 10:44:13.134639    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385293316d74"
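One full collection pass, as above, tails the kubelet and docker/cri-docker units from journald, the last 400 lines of each component container, a filtered dmesg, kubectl describe nodes (run via the version-matched binary under /var/lib/minikube/binaries/v1.24.1 against the in-guest kubeconfig), and a container-status listing. Every collector is handed to /bin/bash -c so that shell features keep working, in particular the backtick substitution in the container-status command, which picks crictl when present and otherwise falls back to plain docker ps -a. A condensed sketch of such a pass, run locally via os/exec rather than over minikube's ssh_runner; the command strings are copied from the log, the Go around them is illustrative only.

package main

import (
	"fmt"
	"os/exec"
)

// gather mirrors one "Gathering logs for X ..." step: the command string is
// handed to /bin/bash -c exactly as in the log.
func gather(name, cmd string) {
	fmt.Printf("Gathering logs for %s ...\n", name)
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		fmt.Printf("%s failed: %v\n", name, err)
	}
	fmt.Print(string(out))
}

func main() {
	gather("kubelet", "sudo journalctl -u kubelet -n 400")
	gather("Docker", "sudo journalctl -u docker -u cri-docker -n 400")
	gather("coredns [1a7c8223b623]", "docker logs --tail 400 1a7c8223b623")
	// `which crictl` expands to the crictl path when installed; otherwise the
	// literal word "crictl" fails to run and the || falls back to docker ps.
	gather("container status",
		"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
}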
	I0805 10:44:15.647919    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:44:20.648414    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:44:20.648729    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:44:20.680606    9068 logs.go:276] 1 containers: [ae2e3b5c46bc]
	I0805 10:44:20.680732    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:44:20.699223    9068 logs.go:276] 1 containers: [207bbc832181]
	I0805 10:44:20.699319    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:44:20.712868    9068 logs.go:276] 2 containers: [e2daae6ade13 1a7c8223b623]
	I0805 10:44:20.712947    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:44:20.725052    9068 logs.go:276] 1 containers: [0db98522e2bf]
	I0805 10:44:20.725123    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:44:20.744417    9068 logs.go:276] 1 containers: [acf7c60c32f0]
	I0805 10:44:20.744489    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:44:20.755606    9068 logs.go:276] 1 containers: [6e0936bd3eb7]
	I0805 10:44:20.755680    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:44:20.766450    9068 logs.go:276] 0 containers: []
	W0805 10:44:20.766462    9068 logs.go:278] No container was found matching "kindnet"
	I0805 10:44:20.766514    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:44:20.777025    9068 logs.go:276] 1 containers: [385293316d74]
	I0805 10:44:20.777041    9068 logs.go:123] Gathering logs for dmesg ...
	I0805 10:44:20.777047    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:44:20.781437    9068 logs.go:123] Gathering logs for coredns [1a7c8223b623] ...
	I0805 10:44:20.781445    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a7c8223b623"
	I0805 10:44:20.793160    9068 logs.go:123] Gathering logs for kube-scheduler [0db98522e2bf] ...
	I0805 10:44:20.793170    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0db98522e2bf"
	I0805 10:44:20.807727    9068 logs.go:123] Gathering logs for kube-proxy [acf7c60c32f0] ...
	I0805 10:44:20.807736    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf7c60c32f0"
	I0805 10:44:20.821282    9068 logs.go:123] Gathering logs for kube-controller-manager [6e0936bd3eb7] ...
	I0805 10:44:20.821299    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e0936bd3eb7"
	I0805 10:44:20.839067    9068 logs.go:123] Gathering logs for storage-provisioner [385293316d74] ...
	I0805 10:44:20.839077    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385293316d74"
	I0805 10:44:20.850789    9068 logs.go:123] Gathering logs for container status ...
	I0805 10:44:20.850799    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:44:20.862026    9068 logs.go:123] Gathering logs for kubelet ...
	I0805 10:44:20.862037    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:44:20.894888    9068 logs.go:123] Gathering logs for kube-apiserver [ae2e3b5c46bc] ...
	I0805 10:44:20.894895    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2e3b5c46bc"
	I0805 10:44:20.909456    9068 logs.go:123] Gathering logs for etcd [207bbc832181] ...
	I0805 10:44:20.909466    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207bbc832181"
	I0805 10:44:20.924174    9068 logs.go:123] Gathering logs for coredns [e2daae6ade13] ...
	I0805 10:44:20.924185    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2daae6ade13"
	I0805 10:44:20.939657    9068 logs.go:123] Gathering logs for Docker ...
	I0805 10:44:20.939667    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:44:20.963202    9068 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:44:20.963215    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:44:23.498058    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:44:28.499833    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:44:28.500058    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:44:28.525945    9068 logs.go:276] 1 containers: [ae2e3b5c46bc]
	I0805 10:44:28.526064    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:44:28.546739    9068 logs.go:276] 1 containers: [207bbc832181]
	I0805 10:44:28.546826    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:44:28.559706    9068 logs.go:276] 2 containers: [e2daae6ade13 1a7c8223b623]
	I0805 10:44:28.559778    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:44:28.570816    9068 logs.go:276] 1 containers: [0db98522e2bf]
	I0805 10:44:28.570886    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:44:28.581469    9068 logs.go:276] 1 containers: [acf7c60c32f0]
	I0805 10:44:28.581546    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:44:28.591605    9068 logs.go:276] 1 containers: [6e0936bd3eb7]
	I0805 10:44:28.591667    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:44:28.601196    9068 logs.go:276] 0 containers: []
	W0805 10:44:28.601208    9068 logs.go:278] No container was found matching "kindnet"
	I0805 10:44:28.601257    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:44:28.612434    9068 logs.go:276] 1 containers: [385293316d74]
	I0805 10:44:28.612449    9068 logs.go:123] Gathering logs for coredns [e2daae6ade13] ...
	I0805 10:44:28.612456    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2daae6ade13"
	I0805 10:44:28.623554    9068 logs.go:123] Gathering logs for coredns [1a7c8223b623] ...
	I0805 10:44:28.623565    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a7c8223b623"
	I0805 10:44:28.634910    9068 logs.go:123] Gathering logs for kube-scheduler [0db98522e2bf] ...
	I0805 10:44:28.634921    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0db98522e2bf"
	I0805 10:44:28.655126    9068 logs.go:123] Gathering logs for kube-proxy [acf7c60c32f0] ...
	I0805 10:44:28.655137    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf7c60c32f0"
	I0805 10:44:28.666585    9068 logs.go:123] Gathering logs for storage-provisioner [385293316d74] ...
	I0805 10:44:28.666596    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385293316d74"
	I0805 10:44:28.677540    9068 logs.go:123] Gathering logs for Docker ...
	I0805 10:44:28.677551    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:44:28.700783    9068 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:44:28.700790    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:44:28.738541    9068 logs.go:123] Gathering logs for kube-apiserver [ae2e3b5c46bc] ...
	I0805 10:44:28.738552    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2e3b5c46bc"
	I0805 10:44:28.752904    9068 logs.go:123] Gathering logs for container status ...
	I0805 10:44:28.752918    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:44:28.764544    9068 logs.go:123] Gathering logs for etcd [207bbc832181] ...
	I0805 10:44:28.764556    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207bbc832181"
	I0805 10:44:28.785748    9068 logs.go:123] Gathering logs for kube-controller-manager [6e0936bd3eb7] ...
	I0805 10:44:28.785763    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e0936bd3eb7"
	I0805 10:44:28.811246    9068 logs.go:123] Gathering logs for kubelet ...
	I0805 10:44:28.811256    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:44:28.846731    9068 logs.go:123] Gathering logs for dmesg ...
	I0805 10:44:28.846741    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:44:31.352871    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:44:36.353977    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:44:36.354124    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:44:36.371505    9068 logs.go:276] 1 containers: [ae2e3b5c46bc]
	I0805 10:44:36.371587    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:44:36.385212    9068 logs.go:276] 1 containers: [207bbc832181]
	I0805 10:44:36.385280    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:44:36.396468    9068 logs.go:276] 2 containers: [e2daae6ade13 1a7c8223b623]
	I0805 10:44:36.396542    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:44:36.407423    9068 logs.go:276] 1 containers: [0db98522e2bf]
	I0805 10:44:36.407491    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:44:36.422234    9068 logs.go:276] 1 containers: [acf7c60c32f0]
	I0805 10:44:36.422305    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:44:36.432715    9068 logs.go:276] 1 containers: [6e0936bd3eb7]
	I0805 10:44:36.432786    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:44:36.443732    9068 logs.go:276] 0 containers: []
	W0805 10:44:36.443748    9068 logs.go:278] No container was found matching "kindnet"
	I0805 10:44:36.443813    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:44:36.454393    9068 logs.go:276] 1 containers: [385293316d74]
	I0805 10:44:36.454410    9068 logs.go:123] Gathering logs for etcd [207bbc832181] ...
	I0805 10:44:36.454417    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207bbc832181"
	I0805 10:44:36.468999    9068 logs.go:123] Gathering logs for coredns [1a7c8223b623] ...
	I0805 10:44:36.469010    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a7c8223b623"
	I0805 10:44:36.481045    9068 logs.go:123] Gathering logs for kube-scheduler [0db98522e2bf] ...
	I0805 10:44:36.481056    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0db98522e2bf"
	I0805 10:44:36.499016    9068 logs.go:123] Gathering logs for container status ...
	I0805 10:44:36.499027    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:44:36.510189    9068 logs.go:123] Gathering logs for kubelet ...
	I0805 10:44:36.510204    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:44:36.542716    9068 logs.go:123] Gathering logs for dmesg ...
	I0805 10:44:36.542726    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:44:36.547845    9068 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:44:36.547855    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:44:36.582659    9068 logs.go:123] Gathering logs for kube-apiserver [ae2e3b5c46bc] ...
	I0805 10:44:36.582670    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2e3b5c46bc"
	I0805 10:44:36.597342    9068 logs.go:123] Gathering logs for Docker ...
	I0805 10:44:36.597355    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:44:36.620410    9068 logs.go:123] Gathering logs for coredns [e2daae6ade13] ...
	I0805 10:44:36.620418    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2daae6ade13"
	I0805 10:44:36.632298    9068 logs.go:123] Gathering logs for kube-proxy [acf7c60c32f0] ...
	I0805 10:44:36.632308    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf7c60c32f0"
	I0805 10:44:36.643447    9068 logs.go:123] Gathering logs for kube-controller-manager [6e0936bd3eb7] ...
	I0805 10:44:36.643458    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e0936bd3eb7"
	I0805 10:44:36.661003    9068 logs.go:123] Gathering logs for storage-provisioner [385293316d74] ...
	I0805 10:44:36.661015    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385293316d74"
	I0805 10:44:39.174981    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:44:44.177245    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:44:44.177539    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:44:44.208191    9068 logs.go:276] 1 containers: [ae2e3b5c46bc]
	I0805 10:44:44.208315    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:44:44.226425    9068 logs.go:276] 1 containers: [207bbc832181]
	I0805 10:44:44.226519    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:44:44.240536    9068 logs.go:276] 3 containers: [c00ea726514c e2daae6ade13 1a7c8223b623]
	I0805 10:44:44.240616    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:44:44.252476    9068 logs.go:276] 1 containers: [0db98522e2bf]
	I0805 10:44:44.252548    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:44:44.263520    9068 logs.go:276] 1 containers: [acf7c60c32f0]
	I0805 10:44:44.263589    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:44:44.274223    9068 logs.go:276] 1 containers: [6e0936bd3eb7]
	I0805 10:44:44.274287    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:44:44.284736    9068 logs.go:276] 0 containers: []
	W0805 10:44:44.284749    9068 logs.go:278] No container was found matching "kindnet"
	I0805 10:44:44.284808    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:44:44.295405    9068 logs.go:276] 1 containers: [385293316d74]
	I0805 10:44:44.295424    9068 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:44:44.295431    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:44:44.337526    9068 logs.go:123] Gathering logs for kube-scheduler [0db98522e2bf] ...
	I0805 10:44:44.337539    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0db98522e2bf"
	I0805 10:44:44.359831    9068 logs.go:123] Gathering logs for Docker ...
	I0805 10:44:44.359845    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:44:44.383376    9068 logs.go:123] Gathering logs for kubelet ...
	I0805 10:44:44.383386    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:44:44.415138    9068 logs.go:123] Gathering logs for coredns [c00ea726514c] ...
	I0805 10:44:44.415145    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c00ea726514c"
	I0805 10:44:44.426393    9068 logs.go:123] Gathering logs for kube-controller-manager [6e0936bd3eb7] ...
	I0805 10:44:44.426406    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e0936bd3eb7"
	I0805 10:44:44.443357    9068 logs.go:123] Gathering logs for dmesg ...
	I0805 10:44:44.443368    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:44:44.447914    9068 logs.go:123] Gathering logs for etcd [207bbc832181] ...
	I0805 10:44:44.447920    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207bbc832181"
	I0805 10:44:44.462239    9068 logs.go:123] Gathering logs for kube-proxy [acf7c60c32f0] ...
	I0805 10:44:44.462250    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf7c60c32f0"
	I0805 10:44:44.473941    9068 logs.go:123] Gathering logs for kube-apiserver [ae2e3b5c46bc] ...
	I0805 10:44:44.473952    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2e3b5c46bc"
	I0805 10:44:44.497300    9068 logs.go:123] Gathering logs for coredns [e2daae6ade13] ...
	I0805 10:44:44.497311    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2daae6ade13"
	I0805 10:44:44.509287    9068 logs.go:123] Gathering logs for coredns [1a7c8223b623] ...
	I0805 10:44:44.509298    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a7c8223b623"
	I0805 10:44:44.521072    9068 logs.go:123] Gathering logs for storage-provisioner [385293316d74] ...
	I0805 10:44:44.521084    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385293316d74"
	I0805 10:44:44.532313    9068 logs.go:123] Gathering logs for container status ...
	I0805 10:44:44.532324    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:44:47.044948    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:44:52.047271    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:44:52.047383    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:44:52.058796    9068 logs.go:276] 1 containers: [ae2e3b5c46bc]
	I0805 10:44:52.058865    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:44:52.069232    9068 logs.go:276] 1 containers: [207bbc832181]
	I0805 10:44:52.069302    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:44:52.080207    9068 logs.go:276] 3 containers: [c00ea726514c e2daae6ade13 1a7c8223b623]
	I0805 10:44:52.080277    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:44:52.090916    9068 logs.go:276] 1 containers: [0db98522e2bf]
	I0805 10:44:52.090988    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:44:52.101533    9068 logs.go:276] 1 containers: [acf7c60c32f0]
	I0805 10:44:52.101599    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:44:52.112186    9068 logs.go:276] 1 containers: [6e0936bd3eb7]
	I0805 10:44:52.112256    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:44:52.123217    9068 logs.go:276] 0 containers: []
	W0805 10:44:52.123228    9068 logs.go:278] No container was found matching "kindnet"
	I0805 10:44:52.123285    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:44:52.138126    9068 logs.go:276] 1 containers: [385293316d74]
	I0805 10:44:52.138142    9068 logs.go:123] Gathering logs for kubelet ...
	I0805 10:44:52.138148    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:44:52.170651    9068 logs.go:123] Gathering logs for coredns [c00ea726514c] ...
	I0805 10:44:52.170659    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c00ea726514c"
	I0805 10:44:52.181632    9068 logs.go:123] Gathering logs for storage-provisioner [385293316d74] ...
	I0805 10:44:52.181644    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385293316d74"
	I0805 10:44:52.193107    9068 logs.go:123] Gathering logs for etcd [207bbc832181] ...
	I0805 10:44:52.193119    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207bbc832181"
	I0805 10:44:52.213846    9068 logs.go:123] Gathering logs for kube-proxy [acf7c60c32f0] ...
	I0805 10:44:52.213859    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf7c60c32f0"
	I0805 10:44:52.225404    9068 logs.go:123] Gathering logs for kube-controller-manager [6e0936bd3eb7] ...
	I0805 10:44:52.225418    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e0936bd3eb7"
	I0805 10:44:52.244227    9068 logs.go:123] Gathering logs for Docker ...
	I0805 10:44:52.244238    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:44:52.269277    9068 logs.go:123] Gathering logs for kube-apiserver [ae2e3b5c46bc] ...
	I0805 10:44:52.269285    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2e3b5c46bc"
	I0805 10:44:52.288509    9068 logs.go:123] Gathering logs for dmesg ...
	I0805 10:44:52.288519    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:44:52.293302    9068 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:44:52.293309    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:44:52.327262    9068 logs.go:123] Gathering logs for coredns [e2daae6ade13] ...
	I0805 10:44:52.327277    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2daae6ade13"
	I0805 10:44:52.342315    9068 logs.go:123] Gathering logs for coredns [1a7c8223b623] ...
	I0805 10:44:52.342325    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a7c8223b623"
	I0805 10:44:52.354195    9068 logs.go:123] Gathering logs for kube-scheduler [0db98522e2bf] ...
	I0805 10:44:52.354205    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0db98522e2bf"
	I0805 10:44:52.368437    9068 logs.go:123] Gathering logs for container status ...
	I0805 10:44:52.368449    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:44:54.881438    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:44:59.881656    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:44:59.881902    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:44:59.908463    9068 logs.go:276] 1 containers: [ae2e3b5c46bc]
	I0805 10:44:59.908574    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:44:59.925213    9068 logs.go:276] 1 containers: [207bbc832181]
	I0805 10:44:59.925292    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:44:59.940476    9068 logs.go:276] 4 containers: [9e1f4b77dc16 c00ea726514c e2daae6ade13 1a7c8223b623]
	I0805 10:44:59.940564    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:44:59.951145    9068 logs.go:276] 1 containers: [0db98522e2bf]
	I0805 10:44:59.951214    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:44:59.965208    9068 logs.go:276] 1 containers: [acf7c60c32f0]
	I0805 10:44:59.965281    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:44:59.975550    9068 logs.go:276] 1 containers: [6e0936bd3eb7]
	I0805 10:44:59.975623    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:44:59.985954    9068 logs.go:276] 0 containers: []
	W0805 10:44:59.985966    9068 logs.go:278] No container was found matching "kindnet"
	I0805 10:44:59.986020    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:44:59.996217    9068 logs.go:276] 1 containers: [385293316d74]
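Note the drift in the coredns enumeration across passes: two container IDs at 10:44:12 (e2daae6ade13, 1a7c8223b623), three by 10:44:44 (c00ea726514c appears), and four by 10:44:59 (9e1f4b77dc16). Since docker ps -a also lists exited containers, the growing list is consistent with coredns being repeatedly restarted while the apiserver stays unreachable, though the log itself does not show the containers' states; widening the --format string to include {{.Status}} would confirm that.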
	I0805 10:44:59.996235    9068 logs.go:123] Gathering logs for storage-provisioner [385293316d74] ...
	I0805 10:44:59.996241    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385293316d74"
	I0805 10:45:00.007878    9068 logs.go:123] Gathering logs for container status ...
	I0805 10:45:00.007893    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:45:00.020214    9068 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:45:00.020227    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:45:00.053968    9068 logs.go:123] Gathering logs for etcd [207bbc832181] ...
	I0805 10:45:00.053983    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207bbc832181"
	I0805 10:45:00.068089    9068 logs.go:123] Gathering logs for coredns [c00ea726514c] ...
	I0805 10:45:00.068102    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c00ea726514c"
	I0805 10:45:00.079330    9068 logs.go:123] Gathering logs for kube-apiserver [ae2e3b5c46bc] ...
	I0805 10:45:00.079342    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2e3b5c46bc"
	I0805 10:45:00.098345    9068 logs.go:123] Gathering logs for kube-proxy [acf7c60c32f0] ...
	I0805 10:45:00.098356    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf7c60c32f0"
	I0805 10:45:00.110045    9068 logs.go:123] Gathering logs for kube-scheduler [0db98522e2bf] ...
	I0805 10:45:00.110058    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0db98522e2bf"
	I0805 10:45:00.132552    9068 logs.go:123] Gathering logs for kube-controller-manager [6e0936bd3eb7] ...
	I0805 10:45:00.132564    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e0936bd3eb7"
	I0805 10:45:00.150056    9068 logs.go:123] Gathering logs for Docker ...
	I0805 10:45:00.150067    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:45:00.174573    9068 logs.go:123] Gathering logs for dmesg ...
	I0805 10:45:00.174583    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:45:00.178627    9068 logs.go:123] Gathering logs for coredns [e2daae6ade13] ...
	I0805 10:45:00.178635    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2daae6ade13"
	I0805 10:45:00.190549    9068 logs.go:123] Gathering logs for coredns [1a7c8223b623] ...
	I0805 10:45:00.190561    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a7c8223b623"
	I0805 10:45:00.202652    9068 logs.go:123] Gathering logs for kubelet ...
	I0805 10:45:00.202663    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:45:00.237509    9068 logs.go:123] Gathering logs for coredns [9e1f4b77dc16] ...
	I0805 10:45:00.237519    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e1f4b77dc16"
	I0805 10:45:02.751237    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:45:07.753762    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:45:07.753892    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:45:07.767401    9068 logs.go:276] 1 containers: [ae2e3b5c46bc]
	I0805 10:45:07.767482    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:45:07.778757    9068 logs.go:276] 1 containers: [207bbc832181]
	I0805 10:45:07.778821    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:45:07.793703    9068 logs.go:276] 4 containers: [9e1f4b77dc16 c00ea726514c e2daae6ade13 1a7c8223b623]
	I0805 10:45:07.793774    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:45:07.803771    9068 logs.go:276] 1 containers: [0db98522e2bf]
	I0805 10:45:07.803841    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:45:07.814609    9068 logs.go:276] 1 containers: [acf7c60c32f0]
	I0805 10:45:07.814675    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:45:07.825586    9068 logs.go:276] 1 containers: [6e0936bd3eb7]
	I0805 10:45:07.825649    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:45:07.836019    9068 logs.go:276] 0 containers: []
	W0805 10:45:07.836030    9068 logs.go:278] No container was found matching "kindnet"
	I0805 10:45:07.836083    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:45:07.851067    9068 logs.go:276] 1 containers: [385293316d74]
	I0805 10:45:07.851084    9068 logs.go:123] Gathering logs for kube-proxy [acf7c60c32f0] ...
	I0805 10:45:07.851090    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf7c60c32f0"
	I0805 10:45:07.876345    9068 logs.go:123] Gathering logs for dmesg ...
	I0805 10:45:07.876359    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:45:07.881617    9068 logs.go:123] Gathering logs for coredns [9e1f4b77dc16] ...
	I0805 10:45:07.881626    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e1f4b77dc16"
	I0805 10:45:07.893471    9068 logs.go:123] Gathering logs for coredns [e2daae6ade13] ...
	I0805 10:45:07.893485    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2daae6ade13"
	I0805 10:45:07.907633    9068 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:45:07.907644    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:45:07.942973    9068 logs.go:123] Gathering logs for coredns [1a7c8223b623] ...
	I0805 10:45:07.942986    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a7c8223b623"
	I0805 10:45:07.954445    9068 logs.go:123] Gathering logs for kube-controller-manager [6e0936bd3eb7] ...
	I0805 10:45:07.954456    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e0936bd3eb7"
	I0805 10:45:07.972193    9068 logs.go:123] Gathering logs for storage-provisioner [385293316d74] ...
	I0805 10:45:07.972208    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385293316d74"
	I0805 10:45:07.988911    9068 logs.go:123] Gathering logs for Docker ...
	I0805 10:45:07.988924    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:45:08.012456    9068 logs.go:123] Gathering logs for container status ...
	I0805 10:45:08.012464    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:45:08.030407    9068 logs.go:123] Gathering logs for kubelet ...
	I0805 10:45:08.030420    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:45:08.062597    9068 logs.go:123] Gathering logs for coredns [c00ea726514c] ...
	I0805 10:45:08.062605    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c00ea726514c"
	I0805 10:45:08.074372    9068 logs.go:123] Gathering logs for kube-scheduler [0db98522e2bf] ...
	I0805 10:45:08.074384    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0db98522e2bf"
	I0805 10:45:08.089704    9068 logs.go:123] Gathering logs for kube-apiserver [ae2e3b5c46bc] ...
	I0805 10:45:08.089718    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2e3b5c46bc"
	I0805 10:45:08.104327    9068 logs.go:123] Gathering logs for etcd [207bbc832181] ...
	I0805 10:45:08.104337    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207bbc832181"
	I0805 10:45:10.620123    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:45:15.622442    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:45:15.622589    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:45:15.635800    9068 logs.go:276] 1 containers: [ae2e3b5c46bc]
	I0805 10:45:15.635873    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:45:15.646572    9068 logs.go:276] 1 containers: [207bbc832181]
	I0805 10:45:15.646635    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:45:15.657028    9068 logs.go:276] 4 containers: [9e1f4b77dc16 c00ea726514c e2daae6ade13 1a7c8223b623]
	I0805 10:45:15.657101    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:45:15.667909    9068 logs.go:276] 1 containers: [0db98522e2bf]
	I0805 10:45:15.667976    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:45:15.678324    9068 logs.go:276] 1 containers: [acf7c60c32f0]
	I0805 10:45:15.678395    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:45:15.689122    9068 logs.go:276] 1 containers: [6e0936bd3eb7]
	I0805 10:45:15.689192    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:45:15.699429    9068 logs.go:276] 0 containers: []
	W0805 10:45:15.699439    9068 logs.go:278] No container was found matching "kindnet"
	I0805 10:45:15.699492    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:45:15.710972    9068 logs.go:276] 1 containers: [385293316d74]
	I0805 10:45:15.710991    9068 logs.go:123] Gathering logs for dmesg ...
	I0805 10:45:15.710998    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:45:15.715291    9068 logs.go:123] Gathering logs for coredns [9e1f4b77dc16] ...
	I0805 10:45:15.715298    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e1f4b77dc16"
	I0805 10:45:15.733024    9068 logs.go:123] Gathering logs for coredns [1a7c8223b623] ...
	I0805 10:45:15.733036    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a7c8223b623"
	I0805 10:45:15.745038    9068 logs.go:123] Gathering logs for kube-scheduler [0db98522e2bf] ...
	I0805 10:45:15.745050    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0db98522e2bf"
	I0805 10:45:15.760967    9068 logs.go:123] Gathering logs for kube-controller-manager [6e0936bd3eb7] ...
	I0805 10:45:15.760978    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e0936bd3eb7"
	I0805 10:45:15.779317    9068 logs.go:123] Gathering logs for kube-apiserver [ae2e3b5c46bc] ...
	I0805 10:45:15.779331    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2e3b5c46bc"
	I0805 10:45:15.794025    9068 logs.go:123] Gathering logs for etcd [207bbc832181] ...
	I0805 10:45:15.794035    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207bbc832181"
	I0805 10:45:15.808046    9068 logs.go:123] Gathering logs for coredns [e2daae6ade13] ...
	I0805 10:45:15.808056    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2daae6ade13"
	I0805 10:45:15.821565    9068 logs.go:123] Gathering logs for Docker ...
	I0805 10:45:15.821581    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:45:15.844892    9068 logs.go:123] Gathering logs for kubelet ...
	I0805 10:45:15.844900    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:45:15.877504    9068 logs.go:123] Gathering logs for storage-provisioner [385293316d74] ...
	I0805 10:45:15.877512    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385293316d74"
	I0805 10:45:15.889447    9068 logs.go:123] Gathering logs for container status ...
	I0805 10:45:15.889462    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:45:15.901485    9068 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:45:15.901495    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:45:15.936162    9068 logs.go:123] Gathering logs for coredns [c00ea726514c] ...
	I0805 10:45:15.936173    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c00ea726514c"
	I0805 10:45:15.947815    9068 logs.go:123] Gathering logs for kube-proxy [acf7c60c32f0] ...
	I0805 10:45:15.947825    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf7c60c32f0"
	I0805 10:45:18.459874    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:45:23.462185    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:45:23.462401    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:45:23.488932    9068 logs.go:276] 1 containers: [ae2e3b5c46bc]
	I0805 10:45:23.489064    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:45:23.507331    9068 logs.go:276] 1 containers: [207bbc832181]
	I0805 10:45:23.507434    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:45:23.521948    9068 logs.go:276] 4 containers: [9e1f4b77dc16 c00ea726514c e2daae6ade13 1a7c8223b623]
	I0805 10:45:23.522029    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:45:23.538578    9068 logs.go:276] 1 containers: [0db98522e2bf]
	I0805 10:45:23.538780    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:45:23.549768    9068 logs.go:276] 1 containers: [acf7c60c32f0]
	I0805 10:45:23.549837    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:45:23.560206    9068 logs.go:276] 1 containers: [6e0936bd3eb7]
	I0805 10:45:23.560266    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:45:23.574182    9068 logs.go:276] 0 containers: []
	W0805 10:45:23.574198    9068 logs.go:278] No container was found matching "kindnet"
	I0805 10:45:23.574261    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:45:23.585906    9068 logs.go:276] 1 containers: [385293316d74]
	I0805 10:45:23.585926    9068 logs.go:123] Gathering logs for etcd [207bbc832181] ...
	I0805 10:45:23.585932    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207bbc832181"
	I0805 10:45:23.599585    9068 logs.go:123] Gathering logs for coredns [9e1f4b77dc16] ...
	I0805 10:45:23.599596    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e1f4b77dc16"
	I0805 10:45:23.610942    9068 logs.go:123] Gathering logs for coredns [c00ea726514c] ...
	I0805 10:45:23.610954    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c00ea726514c"
	I0805 10:45:23.626876    9068 logs.go:123] Gathering logs for kube-scheduler [0db98522e2bf] ...
	I0805 10:45:23.626887    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0db98522e2bf"
	I0805 10:45:23.641147    9068 logs.go:123] Gathering logs for kube-controller-manager [6e0936bd3eb7] ...
	I0805 10:45:23.641158    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e0936bd3eb7"
	I0805 10:45:23.658511    9068 logs.go:123] Gathering logs for dmesg ...
	I0805 10:45:23.658523    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:45:23.662588    9068 logs.go:123] Gathering logs for kube-apiserver [ae2e3b5c46bc] ...
	I0805 10:45:23.662594    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2e3b5c46bc"
	I0805 10:45:23.677070    9068 logs.go:123] Gathering logs for coredns [e2daae6ade13] ...
	I0805 10:45:23.677081    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2daae6ade13"
	I0805 10:45:23.688826    9068 logs.go:123] Gathering logs for kube-proxy [acf7c60c32f0] ...
	I0805 10:45:23.688837    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf7c60c32f0"
	I0805 10:45:23.702083    9068 logs.go:123] Gathering logs for storage-provisioner [385293316d74] ...
	I0805 10:45:23.702094    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385293316d74"
	I0805 10:45:23.713764    9068 logs.go:123] Gathering logs for kubelet ...
	I0805 10:45:23.713780    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:45:23.746849    9068 logs.go:123] Gathering logs for Docker ...
	I0805 10:45:23.746858    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:45:23.770246    9068 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:45:23.770258    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:45:23.805988    9068 logs.go:123] Gathering logs for coredns [1a7c8223b623] ...
	I0805 10:45:23.806000    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a7c8223b623"
	I0805 10:45:23.820690    9068 logs.go:123] Gathering logs for container status ...
	I0805 10:45:23.820703    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:45:26.334727    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:45:31.337031    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:45:31.337197    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:45:31.348358    9068 logs.go:276] 1 containers: [ae2e3b5c46bc]
	I0805 10:45:31.348429    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:45:31.359350    9068 logs.go:276] 1 containers: [207bbc832181]
	I0805 10:45:31.359428    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:45:31.371072    9068 logs.go:276] 4 containers: [9e1f4b77dc16 c00ea726514c e2daae6ade13 1a7c8223b623]
	I0805 10:45:31.371141    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:45:31.386493    9068 logs.go:276] 1 containers: [0db98522e2bf]
	I0805 10:45:31.386563    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:45:31.396873    9068 logs.go:276] 1 containers: [acf7c60c32f0]
	I0805 10:45:31.396938    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:45:31.407790    9068 logs.go:276] 1 containers: [6e0936bd3eb7]
	I0805 10:45:31.407849    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:45:31.425899    9068 logs.go:276] 0 containers: []
	W0805 10:45:31.425910    9068 logs.go:278] No container was found matching "kindnet"
	I0805 10:45:31.425972    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:45:31.436672    9068 logs.go:276] 1 containers: [385293316d74]
	I0805 10:45:31.436688    9068 logs.go:123] Gathering logs for coredns [e2daae6ade13] ...
	I0805 10:45:31.436693    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2daae6ade13"
	I0805 10:45:31.448688    9068 logs.go:123] Gathering logs for kube-scheduler [0db98522e2bf] ...
	I0805 10:45:31.448699    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0db98522e2bf"
	I0805 10:45:31.463282    9068 logs.go:123] Gathering logs for kube-proxy [acf7c60c32f0] ...
	I0805 10:45:31.463294    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf7c60c32f0"
	I0805 10:45:31.475939    9068 logs.go:123] Gathering logs for kubelet ...
	I0805 10:45:31.475953    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:45:31.511180    9068 logs.go:123] Gathering logs for dmesg ...
	I0805 10:45:31.511188    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:45:31.515605    9068 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:45:31.515613    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:45:31.554088    9068 logs.go:123] Gathering logs for storage-provisioner [385293316d74] ...
	I0805 10:45:31.554097    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385293316d74"
	I0805 10:45:31.566103    9068 logs.go:123] Gathering logs for container status ...
	I0805 10:45:31.566118    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:45:31.577609    9068 logs.go:123] Gathering logs for kube-apiserver [ae2e3b5c46bc] ...
	I0805 10:45:31.577624    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2e3b5c46bc"
	I0805 10:45:31.592557    9068 logs.go:123] Gathering logs for coredns [9e1f4b77dc16] ...
	I0805 10:45:31.592573    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e1f4b77dc16"
	I0805 10:45:31.607955    9068 logs.go:123] Gathering logs for coredns [1a7c8223b623] ...
	I0805 10:45:31.607969    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a7c8223b623"
	I0805 10:45:31.619497    9068 logs.go:123] Gathering logs for kube-controller-manager [6e0936bd3eb7] ...
	I0805 10:45:31.619512    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e0936bd3eb7"
	I0805 10:45:31.643332    9068 logs.go:123] Gathering logs for Docker ...
	I0805 10:45:31.643341    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:45:31.667650    9068 logs.go:123] Gathering logs for etcd [207bbc832181] ...
	I0805 10:45:31.667657    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207bbc832181"
	I0805 10:45:31.688041    9068 logs.go:123] Gathering logs for coredns [c00ea726514c] ...
	I0805 10:45:31.688055    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c00ea726514c"
	I0805 10:45:34.202058    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:45:39.204447    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:45:39.204572    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:45:39.218601    9068 logs.go:276] 1 containers: [ae2e3b5c46bc]
	I0805 10:45:39.218679    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:45:39.229964    9068 logs.go:276] 1 containers: [207bbc832181]
	I0805 10:45:39.230043    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:45:39.240313    9068 logs.go:276] 4 containers: [9e1f4b77dc16 c00ea726514c e2daae6ade13 1a7c8223b623]
	I0805 10:45:39.240378    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:45:39.250607    9068 logs.go:276] 1 containers: [0db98522e2bf]
	I0805 10:45:39.250669    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:45:39.261525    9068 logs.go:276] 1 containers: [acf7c60c32f0]
	I0805 10:45:39.261599    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:45:39.272001    9068 logs.go:276] 1 containers: [6e0936bd3eb7]
	I0805 10:45:39.272072    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:45:39.286175    9068 logs.go:276] 0 containers: []
	W0805 10:45:39.286186    9068 logs.go:278] No container was found matching "kindnet"
	I0805 10:45:39.286239    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:45:39.296881    9068 logs.go:276] 1 containers: [385293316d74]
	I0805 10:45:39.296899    9068 logs.go:123] Gathering logs for kube-controller-manager [6e0936bd3eb7] ...
	I0805 10:45:39.296905    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e0936bd3eb7"
	I0805 10:45:39.316033    9068 logs.go:123] Gathering logs for storage-provisioner [385293316d74] ...
	I0805 10:45:39.316043    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385293316d74"
	I0805 10:45:39.327543    9068 logs.go:123] Gathering logs for kube-apiserver [ae2e3b5c46bc] ...
	I0805 10:45:39.327552    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2e3b5c46bc"
	I0805 10:45:39.341301    9068 logs.go:123] Gathering logs for coredns [e2daae6ade13] ...
	I0805 10:45:39.341313    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2daae6ade13"
	I0805 10:45:39.353372    9068 logs.go:123] Gathering logs for kube-proxy [acf7c60c32f0] ...
	I0805 10:45:39.353383    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf7c60c32f0"
	I0805 10:45:39.365202    9068 logs.go:123] Gathering logs for container status ...
	I0805 10:45:39.365213    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:45:39.377231    9068 logs.go:123] Gathering logs for kubelet ...
	I0805 10:45:39.377242    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:45:39.410174    9068 logs.go:123] Gathering logs for kube-scheduler [0db98522e2bf] ...
	I0805 10:45:39.410184    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0db98522e2bf"
	I0805 10:45:39.425065    9068 logs.go:123] Gathering logs for coredns [1a7c8223b623] ...
	I0805 10:45:39.425075    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a7c8223b623"
	I0805 10:45:39.437343    9068 logs.go:123] Gathering logs for coredns [9e1f4b77dc16] ...
	I0805 10:45:39.437354    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e1f4b77dc16"
	I0805 10:45:39.451390    9068 logs.go:123] Gathering logs for coredns [c00ea726514c] ...
	I0805 10:45:39.451399    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c00ea726514c"
	I0805 10:45:39.468457    9068 logs.go:123] Gathering logs for etcd [207bbc832181] ...
	I0805 10:45:39.468467    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207bbc832181"
	I0805 10:45:39.482174    9068 logs.go:123] Gathering logs for Docker ...
	I0805 10:45:39.482184    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:45:39.506580    9068 logs.go:123] Gathering logs for dmesg ...
	I0805 10:45:39.506594    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:45:39.510846    9068 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:45:39.510853    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
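Between 10:44:05 and 10:45:47 this probe/collect cycle repeats without a single healthz success: every probe ends in the same context-deadline error, a full collection pass follows, and the next probe starts roughly 2.5 seconds after the pass ends (for example, the pass ending at 10:44:13.134 is followed by a probe at 10:44:15.647). A sketch of that outer loop is below, with the 2.5-second pause inferred from the timestamps and the helpers standing in for the probe and the collection pass shown earlier; it is illustrative, not minikube's code.

package main

import (
	"fmt"
	"time"
)

// probe stands in for the 5-second healthz GET sketched earlier; in this
// run it never succeeds.
func probe() error { return fmt.Errorf("context deadline exceeded") }

// collectLogs stands in for one full gathering pass as shown above.
func collectLogs() { fmt.Println("collecting logs ...") }

func main() {
	for attempt := 1; attempt <= 3; attempt++ { // the real run retries until its overall budget expires
		fmt.Printf("attempt %d: checking apiserver healthz ...\n", attempt)
		if err := probe(); err == nil {
			fmt.Println("apiserver healthy")
			return
		}
		collectLogs()
		// roughly the gap between passes seen in the log; inferred, not documented
		time.Sleep(2500 * time.Millisecond)
	}
	fmt.Println("apiserver never reported healthy")
}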
	I0805 10:45:42.046448    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:45:47.048783    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:45:47.049016    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:45:47.067294    9068 logs.go:276] 1 containers: [ae2e3b5c46bc]
	I0805 10:45:47.067394    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:45:47.082608    9068 logs.go:276] 1 containers: [207bbc832181]
	I0805 10:45:47.082680    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:45:47.094649    9068 logs.go:276] 4 containers: [9e1f4b77dc16 c00ea726514c e2daae6ade13 1a7c8223b623]
	I0805 10:45:47.094715    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:45:47.104812    9068 logs.go:276] 1 containers: [0db98522e2bf]
	I0805 10:45:47.104872    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:45:47.116928    9068 logs.go:276] 1 containers: [acf7c60c32f0]
	I0805 10:45:47.117004    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:45:47.127894    9068 logs.go:276] 1 containers: [6e0936bd3eb7]
	I0805 10:45:47.127961    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:45:47.139652    9068 logs.go:276] 0 containers: []
	W0805 10:45:47.139666    9068 logs.go:278] No container was found matching "kindnet"
	I0805 10:45:47.139725    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:45:47.151913    9068 logs.go:276] 1 containers: [385293316d74]
	I0805 10:45:47.151927    9068 logs.go:123] Gathering logs for kube-controller-manager [6e0936bd3eb7] ...
	I0805 10:45:47.151932    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e0936bd3eb7"
	I0805 10:45:47.172947    9068 logs.go:123] Gathering logs for dmesg ...
	I0805 10:45:47.172961    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:45:47.177479    9068 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:45:47.177487    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:45:47.212205    9068 logs.go:123] Gathering logs for coredns [9e1f4b77dc16] ...
	I0805 10:45:47.212219    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e1f4b77dc16"
	I0805 10:45:47.224011    9068 logs.go:123] Gathering logs for kube-scheduler [0db98522e2bf] ...
	I0805 10:45:47.224022    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0db98522e2bf"
	I0805 10:45:47.238616    9068 logs.go:123] Gathering logs for kube-apiserver [ae2e3b5c46bc] ...
	I0805 10:45:47.238628    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2e3b5c46bc"
	I0805 10:45:47.253088    9068 logs.go:123] Gathering logs for coredns [c00ea726514c] ...
	I0805 10:45:47.253098    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c00ea726514c"
	I0805 10:45:47.265368    9068 logs.go:123] Gathering logs for Docker ...
	I0805 10:45:47.265383    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:45:47.291009    9068 logs.go:123] Gathering logs for kubelet ...
	I0805 10:45:47.291019    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:45:47.324852    9068 logs.go:123] Gathering logs for coredns [1a7c8223b623] ...
	I0805 10:45:47.324863    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a7c8223b623"
	I0805 10:45:47.336960    9068 logs.go:123] Gathering logs for kube-proxy [acf7c60c32f0] ...
	I0805 10:45:47.336974    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf7c60c32f0"
	I0805 10:45:47.349333    9068 logs.go:123] Gathering logs for storage-provisioner [385293316d74] ...
	I0805 10:45:47.349345    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385293316d74"
	I0805 10:45:47.363986    9068 logs.go:123] Gathering logs for etcd [207bbc832181] ...
	I0805 10:45:47.363997    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207bbc832181"
	I0805 10:45:47.385286    9068 logs.go:123] Gathering logs for coredns [e2daae6ade13] ...
	I0805 10:45:47.385297    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2daae6ade13"
	I0805 10:45:47.397137    9068 logs.go:123] Gathering logs for container status ...
	I0805 10:45:47.397149    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:45:49.911418    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:45:54.913753    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:45:54.913922    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:45:54.930173    9068 logs.go:276] 1 containers: [ae2e3b5c46bc]
	I0805 10:45:54.930265    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:45:54.942831    9068 logs.go:276] 1 containers: [207bbc832181]
	I0805 10:45:54.942901    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:45:54.954614    9068 logs.go:276] 4 containers: [9e1f4b77dc16 c00ea726514c e2daae6ade13 1a7c8223b623]
	I0805 10:45:54.954685    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:45:54.966015    9068 logs.go:276] 1 containers: [0db98522e2bf]
	I0805 10:45:54.966084    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:45:54.976469    9068 logs.go:276] 1 containers: [acf7c60c32f0]
	I0805 10:45:54.976536    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:45:54.987143    9068 logs.go:276] 1 containers: [6e0936bd3eb7]
	I0805 10:45:54.987214    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:45:54.997359    9068 logs.go:276] 0 containers: []
	W0805 10:45:54.997376    9068 logs.go:278] No container was found matching "kindnet"
	I0805 10:45:54.997436    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:45:55.008152    9068 logs.go:276] 1 containers: [385293316d74]
	I0805 10:45:55.008168    9068 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:45:55.008173    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:45:55.050416    9068 logs.go:123] Gathering logs for etcd [207bbc832181] ...
	I0805 10:45:55.050427    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207bbc832181"
	I0805 10:45:55.064554    9068 logs.go:123] Gathering logs for storage-provisioner [385293316d74] ...
	I0805 10:45:55.064568    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385293316d74"
	I0805 10:45:55.085187    9068 logs.go:123] Gathering logs for kube-apiserver [ae2e3b5c46bc] ...
	I0805 10:45:55.085198    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2e3b5c46bc"
	I0805 10:45:55.100649    9068 logs.go:123] Gathering logs for Docker ...
	I0805 10:45:55.100662    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:45:55.126214    9068 logs.go:123] Gathering logs for coredns [9e1f4b77dc16] ...
	I0805 10:45:55.126234    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e1f4b77dc16"
	I0805 10:45:55.138150    9068 logs.go:123] Gathering logs for coredns [c00ea726514c] ...
	I0805 10:45:55.138164    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c00ea726514c"
	I0805 10:45:55.150174    9068 logs.go:123] Gathering logs for coredns [e2daae6ade13] ...
	I0805 10:45:55.150186    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2daae6ade13"
	I0805 10:45:55.161964    9068 logs.go:123] Gathering logs for coredns [1a7c8223b623] ...
	I0805 10:45:55.161975    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a7c8223b623"
	I0805 10:45:55.180109    9068 logs.go:123] Gathering logs for kubelet ...
	I0805 10:45:55.180122    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:45:55.214157    9068 logs.go:123] Gathering logs for dmesg ...
	I0805 10:45:55.214169    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:45:55.218745    9068 logs.go:123] Gathering logs for kube-scheduler [0db98522e2bf] ...
	I0805 10:45:55.218759    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0db98522e2bf"
	I0805 10:45:55.233608    9068 logs.go:123] Gathering logs for kube-proxy [acf7c60c32f0] ...
	I0805 10:45:55.233618    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf7c60c32f0"
	I0805 10:45:55.245690    9068 logs.go:123] Gathering logs for kube-controller-manager [6e0936bd3eb7] ...
	I0805 10:45:55.245700    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e0936bd3eb7"
	I0805 10:45:55.263860    9068 logs.go:123] Gathering logs for container status ...
	I0805 10:45:55.263871    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:45:57.777359    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:46:02.779928    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:46:02.780359    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:46:02.814048    9068 logs.go:276] 1 containers: [ae2e3b5c46bc]
	I0805 10:46:02.814180    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:46:02.838071    9068 logs.go:276] 1 containers: [207bbc832181]
	I0805 10:46:02.838192    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:46:02.852977    9068 logs.go:276] 4 containers: [9e1f4b77dc16 c00ea726514c e2daae6ade13 1a7c8223b623]
	I0805 10:46:02.853050    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:46:02.865335    9068 logs.go:276] 1 containers: [0db98522e2bf]
	I0805 10:46:02.865410    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:46:02.876569    9068 logs.go:276] 1 containers: [acf7c60c32f0]
	I0805 10:46:02.876643    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:46:02.887314    9068 logs.go:276] 1 containers: [6e0936bd3eb7]
	I0805 10:46:02.887376    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:46:02.902497    9068 logs.go:276] 0 containers: []
	W0805 10:46:02.902510    9068 logs.go:278] No container was found matching "kindnet"
	I0805 10:46:02.902570    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:46:02.913238    9068 logs.go:276] 1 containers: [385293316d74]
	I0805 10:46:02.913256    9068 logs.go:123] Gathering logs for coredns [e2daae6ade13] ...
	I0805 10:46:02.913261    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2daae6ade13"
	I0805 10:46:02.925265    9068 logs.go:123] Gathering logs for kube-proxy [acf7c60c32f0] ...
	I0805 10:46:02.925279    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf7c60c32f0"
	I0805 10:46:02.937317    9068 logs.go:123] Gathering logs for kube-apiserver [ae2e3b5c46bc] ...
	I0805 10:46:02.937329    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2e3b5c46bc"
	I0805 10:46:02.952549    9068 logs.go:123] Gathering logs for storage-provisioner [385293316d74] ...
	I0805 10:46:02.952562    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385293316d74"
	I0805 10:46:02.964657    9068 logs.go:123] Gathering logs for dmesg ...
	I0805 10:46:02.964668    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:46:02.969124    9068 logs.go:123] Gathering logs for coredns [9e1f4b77dc16] ...
	I0805 10:46:02.969131    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e1f4b77dc16"
	I0805 10:46:02.981088    9068 logs.go:123] Gathering logs for kube-scheduler [0db98522e2bf] ...
	I0805 10:46:02.981099    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0db98522e2bf"
	I0805 10:46:02.995938    9068 logs.go:123] Gathering logs for kubelet ...
	I0805 10:46:02.995949    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:46:03.031110    9068 logs.go:123] Gathering logs for etcd [207bbc832181] ...
	I0805 10:46:03.031118    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207bbc832181"
	I0805 10:46:03.045200    9068 logs.go:123] Gathering logs for coredns [c00ea726514c] ...
	I0805 10:46:03.045211    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c00ea726514c"
	I0805 10:46:03.061814    9068 logs.go:123] Gathering logs for coredns [1a7c8223b623] ...
	I0805 10:46:03.061826    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a7c8223b623"
	I0805 10:46:03.079923    9068 logs.go:123] Gathering logs for kube-controller-manager [6e0936bd3eb7] ...
	I0805 10:46:03.079935    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e0936bd3eb7"
	I0805 10:46:03.098752    9068 logs.go:123] Gathering logs for Docker ...
	I0805 10:46:03.098763    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:46:03.124185    9068 logs.go:123] Gathering logs for container status ...
	I0805 10:46:03.124201    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:46:03.136228    9068 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:46:03.136245    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:46:05.674937    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:46:10.677232    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:46:10.677520    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:46:10.707900    9068 logs.go:276] 1 containers: [ae2e3b5c46bc]
	I0805 10:46:10.708030    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:46:10.726443    9068 logs.go:276] 1 containers: [207bbc832181]
	I0805 10:46:10.726542    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:46:10.740718    9068 logs.go:276] 4 containers: [9e1f4b77dc16 c00ea726514c e2daae6ade13 1a7c8223b623]
	I0805 10:46:10.740802    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:46:10.752925    9068 logs.go:276] 1 containers: [0db98522e2bf]
	I0805 10:46:10.752993    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:46:10.763514    9068 logs.go:276] 1 containers: [acf7c60c32f0]
	I0805 10:46:10.763587    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:46:10.778041    9068 logs.go:276] 1 containers: [6e0936bd3eb7]
	I0805 10:46:10.778110    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:46:10.788642    9068 logs.go:276] 0 containers: []
	W0805 10:46:10.788653    9068 logs.go:278] No container was found matching "kindnet"
	I0805 10:46:10.788708    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:46:10.799622    9068 logs.go:276] 1 containers: [385293316d74]
	I0805 10:46:10.799639    9068 logs.go:123] Gathering logs for dmesg ...
	I0805 10:46:10.799644    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:46:10.804690    9068 logs.go:123] Gathering logs for kube-controller-manager [6e0936bd3eb7] ...
	I0805 10:46:10.804701    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e0936bd3eb7"
	I0805 10:46:10.822216    9068 logs.go:123] Gathering logs for kubelet ...
	I0805 10:46:10.822227    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:46:10.854798    9068 logs.go:123] Gathering logs for etcd [207bbc832181] ...
	I0805 10:46:10.854806    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207bbc832181"
	I0805 10:46:10.868653    9068 logs.go:123] Gathering logs for coredns [c00ea726514c] ...
	I0805 10:46:10.868665    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c00ea726514c"
	I0805 10:46:10.880134    9068 logs.go:123] Gathering logs for coredns [1a7c8223b623] ...
	I0805 10:46:10.880144    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a7c8223b623"
	I0805 10:46:10.892193    9068 logs.go:123] Gathering logs for kube-scheduler [0db98522e2bf] ...
	I0805 10:46:10.892205    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0db98522e2bf"
	I0805 10:46:10.906684    9068 logs.go:123] Gathering logs for kube-proxy [acf7c60c32f0] ...
	I0805 10:46:10.906696    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf7c60c32f0"
	I0805 10:46:10.919099    9068 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:46:10.919110    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:46:10.955351    9068 logs.go:123] Gathering logs for kube-apiserver [ae2e3b5c46bc] ...
	I0805 10:46:10.955361    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2e3b5c46bc"
	I0805 10:46:10.969986    9068 logs.go:123] Gathering logs for coredns [e2daae6ade13] ...
	I0805 10:46:10.969999    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2daae6ade13"
	I0805 10:46:10.984094    9068 logs.go:123] Gathering logs for storage-provisioner [385293316d74] ...
	I0805 10:46:10.984105    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385293316d74"
	I0805 10:46:10.995526    9068 logs.go:123] Gathering logs for Docker ...
	I0805 10:46:10.995535    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:46:11.020868    9068 logs.go:123] Gathering logs for container status ...
	I0805 10:46:11.020881    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:46:11.033709    9068 logs.go:123] Gathering logs for coredns [9e1f4b77dc16] ...
	I0805 10:46:11.033720    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e1f4b77dc16"
	I0805 10:46:13.557894    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:46:18.560155    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:46:18.560286    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:46:18.572427    9068 logs.go:276] 1 containers: [ae2e3b5c46bc]
	I0805 10:46:18.572496    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:46:18.583605    9068 logs.go:276] 1 containers: [207bbc832181]
	I0805 10:46:18.583674    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:46:18.595384    9068 logs.go:276] 4 containers: [9e1f4b77dc16 c00ea726514c e2daae6ade13 1a7c8223b623]
	I0805 10:46:18.595449    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:46:18.605729    9068 logs.go:276] 1 containers: [0db98522e2bf]
	I0805 10:46:18.605789    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:46:18.616547    9068 logs.go:276] 1 containers: [acf7c60c32f0]
	I0805 10:46:18.616600    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:46:18.626881    9068 logs.go:276] 1 containers: [6e0936bd3eb7]
	I0805 10:46:18.626935    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:46:18.637082    9068 logs.go:276] 0 containers: []
	W0805 10:46:18.637092    9068 logs.go:278] No container was found matching "kindnet"
	I0805 10:46:18.637138    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:46:18.647784    9068 logs.go:276] 1 containers: [385293316d74]
	I0805 10:46:18.647802    9068 logs.go:123] Gathering logs for kubelet ...
	I0805 10:46:18.647807    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:46:18.681099    9068 logs.go:123] Gathering logs for kube-proxy [acf7c60c32f0] ...
	I0805 10:46:18.681106    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf7c60c32f0"
	I0805 10:46:18.693167    9068 logs.go:123] Gathering logs for kube-controller-manager [6e0936bd3eb7] ...
	I0805 10:46:18.693176    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e0936bd3eb7"
	I0805 10:46:18.711778    9068 logs.go:123] Gathering logs for storage-provisioner [385293316d74] ...
	I0805 10:46:18.711791    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385293316d74"
	I0805 10:46:18.723630    9068 logs.go:123] Gathering logs for kube-apiserver [ae2e3b5c46bc] ...
	I0805 10:46:18.723644    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2e3b5c46bc"
	I0805 10:46:18.737744    9068 logs.go:123] Gathering logs for etcd [207bbc832181] ...
	I0805 10:46:18.737754    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207bbc832181"
	I0805 10:46:18.751462    9068 logs.go:123] Gathering logs for coredns [c00ea726514c] ...
	I0805 10:46:18.751471    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c00ea726514c"
	I0805 10:46:18.762718    9068 logs.go:123] Gathering logs for Docker ...
	I0805 10:46:18.762732    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:46:18.785876    9068 logs.go:123] Gathering logs for coredns [9e1f4b77dc16] ...
	I0805 10:46:18.785885    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e1f4b77dc16"
	I0805 10:46:18.797508    9068 logs.go:123] Gathering logs for coredns [1a7c8223b623] ...
	I0805 10:46:18.797518    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a7c8223b623"
	I0805 10:46:18.808780    9068 logs.go:123] Gathering logs for kube-scheduler [0db98522e2bf] ...
	I0805 10:46:18.808803    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0db98522e2bf"
	I0805 10:46:18.825016    9068 logs.go:123] Gathering logs for container status ...
	I0805 10:46:18.825029    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:46:18.837732    9068 logs.go:123] Gathering logs for dmesg ...
	I0805 10:46:18.837744    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:46:18.841882    9068 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:46:18.841890    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:46:18.875616    9068 logs.go:123] Gathering logs for coredns [e2daae6ade13] ...
	I0805 10:46:18.875629    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2daae6ade13"
	I0805 10:46:21.388389    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:46:26.390714    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:46:26.390888    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:46:26.407681    9068 logs.go:276] 1 containers: [ae2e3b5c46bc]
	I0805 10:46:26.407760    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:46:26.420882    9068 logs.go:276] 1 containers: [207bbc832181]
	I0805 10:46:26.420952    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:46:26.432784    9068 logs.go:276] 4 containers: [9e1f4b77dc16 c00ea726514c e2daae6ade13 1a7c8223b623]
	I0805 10:46:26.432849    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:46:26.443211    9068 logs.go:276] 1 containers: [0db98522e2bf]
	I0805 10:46:26.443273    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:46:26.453383    9068 logs.go:276] 1 containers: [acf7c60c32f0]
	I0805 10:46:26.453455    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:46:26.463700    9068 logs.go:276] 1 containers: [6e0936bd3eb7]
	I0805 10:46:26.463772    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:46:26.474344    9068 logs.go:276] 0 containers: []
	W0805 10:46:26.474356    9068 logs.go:278] No container was found matching "kindnet"
	I0805 10:46:26.474415    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:46:26.485032    9068 logs.go:276] 1 containers: [385293316d74]
	I0805 10:46:26.485052    9068 logs.go:123] Gathering logs for kube-apiserver [ae2e3b5c46bc] ...
	I0805 10:46:26.485058    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2e3b5c46bc"
	I0805 10:46:26.499476    9068 logs.go:123] Gathering logs for coredns [c00ea726514c] ...
	I0805 10:46:26.499487    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c00ea726514c"
	I0805 10:46:26.511052    9068 logs.go:123] Gathering logs for coredns [e2daae6ade13] ...
	I0805 10:46:26.511062    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2daae6ade13"
	I0805 10:46:26.523187    9068 logs.go:123] Gathering logs for coredns [1a7c8223b623] ...
	I0805 10:46:26.523197    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a7c8223b623"
	I0805 10:46:26.534796    9068 logs.go:123] Gathering logs for kube-scheduler [0db98522e2bf] ...
	I0805 10:46:26.534808    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0db98522e2bf"
	I0805 10:46:26.549855    9068 logs.go:123] Gathering logs for storage-provisioner [385293316d74] ...
	I0805 10:46:26.549869    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385293316d74"
	I0805 10:46:26.563113    9068 logs.go:123] Gathering logs for dmesg ...
	I0805 10:46:26.563123    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:46:26.567346    9068 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:46:26.567353    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:46:26.602100    9068 logs.go:123] Gathering logs for Docker ...
	I0805 10:46:26.602114    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:46:26.625101    9068 logs.go:123] Gathering logs for coredns [9e1f4b77dc16] ...
	I0805 10:46:26.625109    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e1f4b77dc16"
	I0805 10:46:26.637207    9068 logs.go:123] Gathering logs for kube-proxy [acf7c60c32f0] ...
	I0805 10:46:26.637221    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf7c60c32f0"
	I0805 10:46:26.649267    9068 logs.go:123] Gathering logs for kubelet ...
	I0805 10:46:26.649278    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:46:26.681541    9068 logs.go:123] Gathering logs for container status ...
	I0805 10:46:26.681550    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:46:26.692404    9068 logs.go:123] Gathering logs for etcd [207bbc832181] ...
	I0805 10:46:26.692416    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207bbc832181"
	I0805 10:46:26.707520    9068 logs.go:123] Gathering logs for kube-controller-manager [6e0936bd3eb7] ...
	I0805 10:46:26.707534    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e0936bd3eb7"
	I0805 10:46:29.226710    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:46:34.228228    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:46:34.228527    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0805 10:46:34.246421    9068 logs.go:276] 1 containers: [ae2e3b5c46bc]
	I0805 10:46:34.246516    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0805 10:46:34.260496    9068 logs.go:276] 1 containers: [207bbc832181]
	I0805 10:46:34.260599    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0805 10:46:34.272770    9068 logs.go:276] 5 containers: [b4a7e6734dfa 9e1f4b77dc16 c00ea726514c e2daae6ade13 1a7c8223b623]
	I0805 10:46:34.272844    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0805 10:46:34.283598    9068 logs.go:276] 1 containers: [0db98522e2bf]
	I0805 10:46:34.283667    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0805 10:46:34.294242    9068 logs.go:276] 1 containers: [acf7c60c32f0]
	I0805 10:46:34.294305    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0805 10:46:34.309658    9068 logs.go:276] 1 containers: [6e0936bd3eb7]
	I0805 10:46:34.309724    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0805 10:46:34.320057    9068 logs.go:276] 0 containers: []
	W0805 10:46:34.320071    9068 logs.go:278] No container was found matching "kindnet"
	I0805 10:46:34.320133    9068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0805 10:46:34.330717    9068 logs.go:276] 1 containers: [385293316d74]
	I0805 10:46:34.330734    9068 logs.go:123] Gathering logs for kube-apiserver [ae2e3b5c46bc] ...
	I0805 10:46:34.330740    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ae2e3b5c46bc"
	I0805 10:46:34.348581    9068 logs.go:123] Gathering logs for etcd [207bbc832181] ...
	I0805 10:46:34.348592    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 207bbc832181"
	I0805 10:46:34.362522    9068 logs.go:123] Gathering logs for kubelet ...
	I0805 10:46:34.362532    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0805 10:46:34.395975    9068 logs.go:123] Gathering logs for kube-proxy [acf7c60c32f0] ...
	I0805 10:46:34.395990    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 acf7c60c32f0"
	I0805 10:46:34.413338    9068 logs.go:123] Gathering logs for container status ...
	I0805 10:46:34.413350    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0805 10:46:34.424971    9068 logs.go:123] Gathering logs for coredns [b4a7e6734dfa] ...
	I0805 10:46:34.424981    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4a7e6734dfa"
	I0805 10:46:34.436423    9068 logs.go:123] Gathering logs for coredns [9e1f4b77dc16] ...
	I0805 10:46:34.436446    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e1f4b77dc16"
	I0805 10:46:34.448656    9068 logs.go:123] Gathering logs for coredns [c00ea726514c] ...
	I0805 10:46:34.448667    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c00ea726514c"
	I0805 10:46:34.463695    9068 logs.go:123] Gathering logs for coredns [1a7c8223b623] ...
	I0805 10:46:34.463706    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a7c8223b623"
	W0805 10:46:34.474081    9068 logs.go:130] failed coredns [1a7c8223b623]: command: /bin/bash -c "docker logs --tail 400 1a7c8223b623" /bin/bash -c "docker logs --tail 400 1a7c8223b623": Process exited with status 1
	stdout:
	
	stderr:
	Error: No such container: 1a7c8223b623
	 output: 
	** stderr ** 
	Error: No such container: 1a7c8223b623
	
	** /stderr **
	I0805 10:46:34.474088    9068 logs.go:123] Gathering logs for kube-controller-manager [6e0936bd3eb7] ...
	I0805 10:46:34.474095    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6e0936bd3eb7"
	I0805 10:46:34.491885    9068 logs.go:123] Gathering logs for Docker ...
	I0805 10:46:34.491895    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0805 10:46:34.516452    9068 logs.go:123] Gathering logs for dmesg ...
	I0805 10:46:34.516461    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0805 10:46:34.520879    9068 logs.go:123] Gathering logs for describe nodes ...
	I0805 10:46:34.520886    9068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0805 10:46:34.557013    9068 logs.go:123] Gathering logs for coredns [e2daae6ade13] ...
	I0805 10:46:34.557025    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e2daae6ade13"
	I0805 10:46:34.568283    9068 logs.go:123] Gathering logs for kube-scheduler [0db98522e2bf] ...
	I0805 10:46:34.568295    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0db98522e2bf"
	I0805 10:46:34.584705    9068 logs.go:123] Gathering logs for storage-provisioner [385293316d74] ...
	I0805 10:46:34.584716    9068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385293316d74"
	I0805 10:46:37.104850    9068 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0805 10:46:42.107273    9068 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0805 10:46:42.111972    9068 out.go:177] 
	W0805 10:46:42.115978    9068 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0805 10:46:42.115995    9068 out.go:239] * 
	* 
	W0805 10:46:42.117203    9068 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 10:46:42.127901    9068 out.go:177] 

** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-363000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (589.63s)
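The repeating block above is minikube's node-start wait loop: it probes the apiserver's /healthz endpoint, and each time the 5-second attempt times out it shells out to docker ps / docker logs / journalctl to snapshot component logs before trying again, until the overall "wait 6m0s for node" budget is exhausted. Below is a minimal standalone sketch of that probe, handy for reproducing the hang against the same VM; the endpoint and timings are copied from the log, but the code itself is an illustration, not minikube's implementation.

// healthz_probe.go: a hedged sketch of the polling pattern seen in the log
// above. The endpoint, per-attempt timeout, and overall budget are taken
// from the log; the loop structure is an assumption.
package main

import (
	"context"
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	const url = "https://10.0.2.15:8443/healthz" // endpoint from the log
	client := &http.Client{
		Timeout: 5 * time.Second, // each attempt in the log gives up after ~5s
		Transport: &http.Transport{
			// The in-VM apiserver serves a self-signed certificate, so this
			// diagnostic probe skips verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute) // "wait 6m0s"
	defer cancel()
	for {
		req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
		if err != nil {
			fmt.Println("bad request:", err)
			return
		}
		resp, err := client.Do(req)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthy: %s\n", body)
				return
			}
			fmt.Printf("unhealthy (%d): %s\n", resp.StatusCode, body)
		} else {
			fmt.Println("probe failed:", err)
		}
		select {
		case <-ctx.Done():
			fmt.Println("gave up: apiserver never reported healthy")
			return
		case <-time.After(2 * time.Second): // roughly the cadence between checks above
		}
	}
}

In this run every attempt ended in "Client.Timeout exceeded while awaiting headers", i.e. nothing ever answered on 10.0.2.15:8443, which is consistent with the kube-apiserver container never becoming ready inside the upgraded VM.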

TestPause/serial/Start (9.92s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-510000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-510000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.855693708s)

-- stdout --
	* [pause-510000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19374
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-510000" primary control-plane node in "pause-510000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-510000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-510000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-510000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-510000 -n pause-510000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-510000 -n pause-510000: exit status 7 (66.591583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-510000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.92s)
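Every qemu2-driver failure from here on shares one root cause: the driver cannot attach the VM to the network because nothing is accepting connections on /var/run/socket_vmnet. A quick way to confirm that before re-running the suite is to dial the socket directly; the sketch below is a diagnostic illustration (the path comes from the error text, everything else is an assumption), not part of minikube.

// socket_vmnet_check.go: dial the unix socket the qemu2 driver needs.
// "connection refused" reproduces the failure above and means the
// socket_vmnet daemon is not running (or not listening on this path).
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const path = "/var/run/socket_vmnet" // path from the error messages above
	if _, err := os.Stat(path); err != nil {
		fmt.Println("socket file missing:", err)
		return
	}
	conn, err := net.DialTimeout("unix", path, 2*time.Second)
	if err != nil {
		fmt.Println("socket present but not accepting connections:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is listening; the qemu2 driver should be able to connect")
}

If the dial is refused, the host's socket_vmnet service has to be started (or restarted) before any of the qemu2-based tests below can pass; retrying minikube alone, as the harness does, keeps producing the same exit status 80.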

TestNoKubernetes/serial/StartWithK8s (9.89s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-542000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-542000 --driver=qemu2 : exit status 80 (9.832908625s)

-- stdout --
	* [NoKubernetes-542000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19374
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-542000" primary control-plane node in "NoKubernetes-542000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-542000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-542000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-542000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-542000 -n NoKubernetes-542000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-542000 -n NoKubernetes-542000: exit status 7 (53.908875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-542000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.89s)

TestNoKubernetes/serial/StartWithStopK8s (7.42s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-542000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-542000 --no-kubernetes --driver=qemu2 : exit status 80 (7.392157s)

-- stdout --
	* [NoKubernetes-542000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19374
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-542000
	* Restarting existing qemu2 VM for "NoKubernetes-542000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-542000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-542000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-542000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-542000 -n NoKubernetes-542000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-542000 -n NoKubernetes-542000: exit status 7 (32.371125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-542000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (7.42s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.66s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19374
- KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1943462245/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.66s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.51s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19374
- KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current233202933/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.51s)
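Unlike the qemu2 failures, these two hyperkit results are environmental by design: hyperkit is an Intel-only hypervisor, so on darwin/arm64 minikube exits with DRV_UNSUPPORTED_OS rather than attempting a start. The sketch below shows the kind of platform gate involved; it is a hypothetical illustration built on runtime.GOOS/GOARCH, not minikube's actual validation code.

// driver_gate.go: a hypothetical version of the check behind the
// DRV_UNSUPPORTED_OS exit above; hyperkit requires darwin/amd64.
package main

import (
	"fmt"
	"runtime"
)

// supportedDriver is a made-up helper name; minikube's real check differs.
func supportedDriver(driver string) error {
	if driver == "hyperkit" && (runtime.GOOS != "darwin" || runtime.GOARCH != "amd64") {
		return fmt.Errorf("the driver '%s' is not supported on %s/%s",
			driver, runtime.GOOS, runtime.GOARCH)
	}
	return nil
}

func main() {
	if err := supportedDriver("hyperkit"); err != nil {
		fmt.Println("X Exiting due to DRV_UNSUPPORTED_OS:", err)
	}
}

On an arm64 Mac this path is unavoidable, so whether these subtests should skip instead of fail is a harness question rather than a minikube regression.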

TestNoKubernetes/serial/Start (5.27s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-542000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-542000 --no-kubernetes --driver=qemu2 : exit status 80 (5.239973917s)

-- stdout --
	* [NoKubernetes-542000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19374
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-542000
	* Restarting existing qemu2 VM for "NoKubernetes-542000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-542000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-542000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-542000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-542000 -n NoKubernetes-542000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-542000 -n NoKubernetes-542000: exit status 7 (34.301542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-542000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.27s)

TestNoKubernetes/serial/StartNoArgs (5.38s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-542000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-542000 --driver=qemu2 : exit status 80 (5.312268917s)

-- stdout --
	* [NoKubernetes-542000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19374
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-542000
	* Restarting existing qemu2 VM for "NoKubernetes-542000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-542000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-542000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-542000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-542000 -n NoKubernetes-542000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-542000 -n NoKubernetes-542000: exit status 7 (65.097583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-542000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.38s)
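
Both NoKubernetes subtests above fail before any Kubernetes logic runs: the qemu2 driver cannot reach the socket_vmnet daemon, so the VM restart aborts with exit status 80 (GUEST_PROVISION). A minimal host-side triage sketch, using only the paths that appear in the logs above (the nc(1) unix-socket probe and the launchd check are assumptions about available tooling, not commands the test suite runs):

	# Does the unix socket exist, and is anything accepting connections on it?
	ls -l /var/run/socket_vmnet
	nc -U /var/run/socket_vmnet < /dev/null && echo "daemon accepting" || echo "refused or absent"
	# Is a socket_vmnet launchd job loaded at all? (the job label varies by install)
	sudo launchctl list | grep -i socket_vmnet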

TestNetworkPlugins/group/auto/Start (9.91s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-810000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-810000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.906456583s)

-- stdout --
	* [auto-810000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19374
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-810000" primary control-plane node in "auto-810000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-810000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0805 10:48:22.892354    9666 out.go:291] Setting OutFile to fd 1 ...
	I0805 10:48:22.892470    9666 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:48:22.892473    9666 out.go:304] Setting ErrFile to fd 2...
	I0805 10:48:22.892476    9666 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:48:22.892606    9666 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
	I0805 10:48:22.893628    9666 out.go:298] Setting JSON to false
	I0805 10:48:22.909801    9666 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6472,"bootTime":1722873630,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 10:48:22.909882    9666 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 10:48:22.915468    9666 out.go:177] * [auto-810000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 10:48:22.922440    9666 out.go:177]   - MINIKUBE_LOCATION=19374
	I0805 10:48:22.922490    9666 notify.go:220] Checking for updates...
	I0805 10:48:22.929373    9666 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	I0805 10:48:22.932382    9666 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 10:48:22.935289    9666 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 10:48:22.938407    9666 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	I0805 10:48:22.941437    9666 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 10:48:22.943247    9666 config.go:182] Loaded profile config "cert-expiration-440000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 10:48:22.943317    9666 config.go:182] Loaded profile config "multinode-022000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 10:48:22.943368    9666 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 10:48:22.947374    9666 out.go:177] * Using the qemu2 driver based on user configuration
	I0805 10:48:22.954300    9666 start.go:297] selected driver: qemu2
	I0805 10:48:22.954308    9666 start.go:901] validating driver "qemu2" against <nil>
	I0805 10:48:22.954315    9666 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 10:48:22.956594    9666 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 10:48:22.959377    9666 out.go:177] * Automatically selected the socket_vmnet network
	I0805 10:48:22.962521    9666 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 10:48:22.962543    9666 cni.go:84] Creating CNI manager for ""
	I0805 10:48:22.962550    9666 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 10:48:22.962555    9666 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0805 10:48:22.962587    9666 start.go:340] cluster config:
	{Name:auto-810000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-810000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 10:48:22.966371    9666 iso.go:125] acquiring lock: {Name:mk3c732f608e41abb95f9d7b90e7a96dff21b06c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 10:48:22.974411    9666 out.go:177] * Starting "auto-810000" primary control-plane node in "auto-810000" cluster
	I0805 10:48:22.978404    9666 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 10:48:22.978421    9666 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0805 10:48:22.978434    9666 cache.go:56] Caching tarball of preloaded images
	I0805 10:48:22.978496    9666 preload.go:172] Found /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 10:48:22.978502    9666 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 10:48:22.978568    9666 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/auto-810000/config.json ...
	I0805 10:48:22.978586    9666 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/auto-810000/config.json: {Name:mk16e1da09378b5af0b452b9612675f1ddb485f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 10:48:22.978820    9666 start.go:360] acquireMachinesLock for auto-810000: {Name:mkb00f076c5be0afa0d8c4c2e732c0f60f89f86b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 10:48:22.978855    9666 start.go:364] duration metric: took 28.792µs to acquireMachinesLock for "auto-810000"
	I0805 10:48:22.978866    9666 start.go:93] Provisioning new machine with config: &{Name:auto-810000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-810000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 10:48:22.978895    9666 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 10:48:22.987417    9666 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0805 10:48:23.005727    9666 start.go:159] libmachine.API.Create for "auto-810000" (driver="qemu2")
	I0805 10:48:23.005764    9666 client.go:168] LocalClient.Create starting
	I0805 10:48:23.005827    9666 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/ca.pem
	I0805 10:48:23.005857    9666 main.go:141] libmachine: Decoding PEM data...
	I0805 10:48:23.005884    9666 main.go:141] libmachine: Parsing certificate...
	I0805 10:48:23.005939    9666 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/cert.pem
	I0805 10:48:23.005971    9666 main.go:141] libmachine: Decoding PEM data...
	I0805 10:48:23.005985    9666 main.go:141] libmachine: Parsing certificate...
	I0805 10:48:23.006448    9666 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19374-6507/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 10:48:23.161058    9666 main.go:141] libmachine: Creating SSH key...
	I0805 10:48:23.241988    9666 main.go:141] libmachine: Creating Disk image...
	I0805 10:48:23.241993    9666 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 10:48:23.242167    9666 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/auto-810000/disk.qcow2.raw /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/auto-810000/disk.qcow2
	I0805 10:48:23.251420    9666 main.go:141] libmachine: STDOUT: 
	I0805 10:48:23.251434    9666 main.go:141] libmachine: STDERR: 
	I0805 10:48:23.251486    9666 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/auto-810000/disk.qcow2 +20000M
	I0805 10:48:23.259250    9666 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 10:48:23.259264    9666 main.go:141] libmachine: STDERR: 
	I0805 10:48:23.259278    9666 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/auto-810000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/auto-810000/disk.qcow2
	I0805 10:48:23.259283    9666 main.go:141] libmachine: Starting QEMU VM...
	I0805 10:48:23.259293    9666 qemu.go:418] Using hvf for hardware acceleration
	I0805 10:48:23.259323    9666 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/auto-810000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/auto-810000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/auto-810000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:50:88:1b:34:53 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/auto-810000/disk.qcow2
	I0805 10:48:23.260975    9666 main.go:141] libmachine: STDOUT: 
	I0805 10:48:23.260988    9666 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 10:48:23.261011    9666 client.go:171] duration metric: took 255.239292ms to LocalClient.Create
	I0805 10:48:25.263217    9666 start.go:128] duration metric: took 2.284330791s to createHost
	I0805 10:48:25.263300    9666 start.go:83] releasing machines lock for "auto-810000", held for 2.284464834s
	W0805 10:48:25.263413    9666 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 10:48:25.278403    9666 out.go:177] * Deleting "auto-810000" in qemu2 ...
	W0805 10:48:25.304847    9666 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 10:48:25.304945    9666 start.go:729] Will try again in 5 seconds ...
	I0805 10:48:30.307144    9666 start.go:360] acquireMachinesLock for auto-810000: {Name:mkb00f076c5be0afa0d8c4c2e732c0f60f89f86b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 10:48:30.307674    9666 start.go:364] duration metric: took 359.042µs to acquireMachinesLock for "auto-810000"
	I0805 10:48:30.307763    9666 start.go:93] Provisioning new machine with config: &{Name:auto-810000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-810000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 10:48:30.308067    9666 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 10:48:30.323872    9666 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0805 10:48:30.373571    9666 start.go:159] libmachine.API.Create for "auto-810000" (driver="qemu2")
	I0805 10:48:30.373628    9666 client.go:168] LocalClient.Create starting
	I0805 10:48:30.373752    9666 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/ca.pem
	I0805 10:48:30.373804    9666 main.go:141] libmachine: Decoding PEM data...
	I0805 10:48:30.373823    9666 main.go:141] libmachine: Parsing certificate...
	I0805 10:48:30.373892    9666 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/cert.pem
	I0805 10:48:30.373945    9666 main.go:141] libmachine: Decoding PEM data...
	I0805 10:48:30.373957    9666 main.go:141] libmachine: Parsing certificate...
	I0805 10:48:30.374532    9666 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19374-6507/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 10:48:30.535582    9666 main.go:141] libmachine: Creating SSH key...
	I0805 10:48:30.704618    9666 main.go:141] libmachine: Creating Disk image...
	I0805 10:48:30.704627    9666 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 10:48:30.704826    9666 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/auto-810000/disk.qcow2.raw /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/auto-810000/disk.qcow2
	I0805 10:48:30.714416    9666 main.go:141] libmachine: STDOUT: 
	I0805 10:48:30.714435    9666 main.go:141] libmachine: STDERR: 
	I0805 10:48:30.714485    9666 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/auto-810000/disk.qcow2 +20000M
	I0805 10:48:30.722397    9666 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 10:48:30.722418    9666 main.go:141] libmachine: STDERR: 
	I0805 10:48:30.722428    9666 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/auto-810000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/auto-810000/disk.qcow2
	I0805 10:48:30.722432    9666 main.go:141] libmachine: Starting QEMU VM...
	I0805 10:48:30.722443    9666 qemu.go:418] Using hvf for hardware acceleration
	I0805 10:48:30.722469    9666 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/auto-810000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/auto-810000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/auto-810000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:45:93:ad:63:15 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/auto-810000/disk.qcow2
	I0805 10:48:30.724136    9666 main.go:141] libmachine: STDOUT: 
	I0805 10:48:30.724152    9666 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 10:48:30.724165    9666 client.go:171] duration metric: took 350.53725ms to LocalClient.Create
	I0805 10:48:32.726323    9666 start.go:128] duration metric: took 2.418216125s to createHost
	I0805 10:48:32.726449    9666 start.go:83] releasing machines lock for "auto-810000", held for 2.418715625s
	W0805 10:48:32.726815    9666 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-810000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-810000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 10:48:32.741392    9666 out.go:177] 
	W0805 10:48:32.746523    9666 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 10:48:32.746555    9666 out.go:239] * 
	* 
	W0805 10:48:32.749107    9666 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 10:48:32.756408    9666 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.91s)
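
The --alsologtostderr trace above shows how the qemu2 driver wires up networking: it does not open the vmnet socket itself, but wraps the entire qemu-system-aarch64 command in /opt/socket_vmnet/bin/socket_vmnet_client, which connects to /var/run/socket_vmnet and hands the connected descriptor to QEMU as fd 3 (hence "-netdev socket,id=net0,fd=3"). That makes the failure reproducible in isolation; a sketch using the same client binary, with `true` standing in for the QEMU command:

	# socket_vmnet_client connects to the socket, then execs the given command
	# with the connection on fd 3. With no daemon listening it fails up front:
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
	#   => Failed to connect to "/var/run/socket_vmnet": Connection refused

If this one-liner fails with the same message, the daemon itself is down and no minikube profile on this host can start, which matches the pattern across the rest of this group.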

TestNetworkPlugins/group/kindnet/Start (9.8s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-810000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-810000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.800100625s)

-- stdout --
	* [kindnet-810000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19374
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-810000" primary control-plane node in "kindnet-810000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-810000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0805 10:48:34.991441    9778 out.go:291] Setting OutFile to fd 1 ...
	I0805 10:48:34.991587    9778 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:48:34.991590    9778 out.go:304] Setting ErrFile to fd 2...
	I0805 10:48:34.991593    9778 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:48:34.991717    9778 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
	I0805 10:48:34.992826    9778 out.go:298] Setting JSON to false
	I0805 10:48:35.008779    9778 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6484,"bootTime":1722873630,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 10:48:35.008858    9778 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 10:48:35.013908    9778 out.go:177] * [kindnet-810000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 10:48:35.020909    9778 out.go:177]   - MINIKUBE_LOCATION=19374
	I0805 10:48:35.020968    9778 notify.go:220] Checking for updates...
	I0805 10:48:35.027826    9778 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	I0805 10:48:35.030890    9778 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 10:48:35.033807    9778 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 10:48:35.036876    9778 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	I0805 10:48:35.039862    9778 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 10:48:35.043224    9778 config.go:182] Loaded profile config "cert-expiration-440000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 10:48:35.043291    9778 config.go:182] Loaded profile config "multinode-022000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 10:48:35.043339    9778 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 10:48:35.047802    9778 out.go:177] * Using the qemu2 driver based on user configuration
	I0805 10:48:35.053815    9778 start.go:297] selected driver: qemu2
	I0805 10:48:35.053822    9778 start.go:901] validating driver "qemu2" against <nil>
	I0805 10:48:35.053829    9778 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 10:48:35.056169    9778 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 10:48:35.058863    9778 out.go:177] * Automatically selected the socket_vmnet network
	I0805 10:48:35.061925    9778 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 10:48:35.061983    9778 cni.go:84] Creating CNI manager for "kindnet"
	I0805 10:48:35.061991    9778 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0805 10:48:35.062037    9778 start.go:340] cluster config:
	{Name:kindnet-810000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-810000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 10:48:35.065920    9778 iso.go:125] acquiring lock: {Name:mk3c732f608e41abb95f9d7b90e7a96dff21b06c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 10:48:35.073817    9778 out.go:177] * Starting "kindnet-810000" primary control-plane node in "kindnet-810000" cluster
	I0805 10:48:35.077821    9778 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 10:48:35.077839    9778 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0805 10:48:35.077851    9778 cache.go:56] Caching tarball of preloaded images
	I0805 10:48:35.077917    9778 preload.go:172] Found /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 10:48:35.077923    9778 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 10:48:35.077987    9778 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/kindnet-810000/config.json ...
	I0805 10:48:35.077998    9778 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/kindnet-810000/config.json: {Name:mkdcc91bf4179d267639db5f547dd450a3917871 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 10:48:35.078244    9778 start.go:360] acquireMachinesLock for kindnet-810000: {Name:mkb00f076c5be0afa0d8c4c2e732c0f60f89f86b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 10:48:35.078277    9778 start.go:364] duration metric: took 27.542µs to acquireMachinesLock for "kindnet-810000"
	I0805 10:48:35.078288    9778 start.go:93] Provisioning new machine with config: &{Name:kindnet-810000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-810000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 10:48:35.078318    9778 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 10:48:35.086880    9778 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0805 10:48:35.104161    9778 start.go:159] libmachine.API.Create for "kindnet-810000" (driver="qemu2")
	I0805 10:48:35.104195    9778 client.go:168] LocalClient.Create starting
	I0805 10:48:35.104259    9778 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/ca.pem
	I0805 10:48:35.104287    9778 main.go:141] libmachine: Decoding PEM data...
	I0805 10:48:35.104297    9778 main.go:141] libmachine: Parsing certificate...
	I0805 10:48:35.104331    9778 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/cert.pem
	I0805 10:48:35.104353    9778 main.go:141] libmachine: Decoding PEM data...
	I0805 10:48:35.104362    9778 main.go:141] libmachine: Parsing certificate...
	I0805 10:48:35.104706    9778 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19374-6507/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 10:48:35.256277    9778 main.go:141] libmachine: Creating SSH key...
	I0805 10:48:35.313926    9778 main.go:141] libmachine: Creating Disk image...
	I0805 10:48:35.313938    9778 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 10:48:35.314132    9778 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/kindnet-810000/disk.qcow2.raw /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/kindnet-810000/disk.qcow2
	I0805 10:48:35.324073    9778 main.go:141] libmachine: STDOUT: 
	I0805 10:48:35.324094    9778 main.go:141] libmachine: STDERR: 
	I0805 10:48:35.324150    9778 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/kindnet-810000/disk.qcow2 +20000M
	I0805 10:48:35.332292    9778 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 10:48:35.332306    9778 main.go:141] libmachine: STDERR: 
	I0805 10:48:35.332322    9778 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/kindnet-810000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/kindnet-810000/disk.qcow2
	I0805 10:48:35.332329    9778 main.go:141] libmachine: Starting QEMU VM...
	I0805 10:48:35.332342    9778 qemu.go:418] Using hvf for hardware acceleration
	I0805 10:48:35.332365    9778 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/kindnet-810000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/kindnet-810000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/kindnet-810000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:a1:59:88:d0:88 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/kindnet-810000/disk.qcow2
	I0805 10:48:35.333921    9778 main.go:141] libmachine: STDOUT: 
	I0805 10:48:35.333934    9778 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 10:48:35.333953    9778 client.go:171] duration metric: took 229.756209ms to LocalClient.Create
	I0805 10:48:37.336100    9778 start.go:128] duration metric: took 2.257793166s to createHost
	I0805 10:48:37.336149    9778 start.go:83] releasing machines lock for "kindnet-810000", held for 2.257891833s
	W0805 10:48:37.336213    9778 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 10:48:37.350412    9778 out.go:177] * Deleting "kindnet-810000" in qemu2 ...
	W0805 10:48:37.378053    9778 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 10:48:37.378098    9778 start.go:729] Will try again in 5 seconds ...
	I0805 10:48:42.380214    9778 start.go:360] acquireMachinesLock for kindnet-810000: {Name:mkb00f076c5be0afa0d8c4c2e732c0f60f89f86b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 10:48:42.380620    9778 start.go:364] duration metric: took 320.25µs to acquireMachinesLock for "kindnet-810000"
	I0805 10:48:42.380728    9778 start.go:93] Provisioning new machine with config: &{Name:kindnet-810000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-810000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 10:48:42.381064    9778 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 10:48:42.397819    9778 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0805 10:48:42.446946    9778 start.go:159] libmachine.API.Create for "kindnet-810000" (driver="qemu2")
	I0805 10:48:42.447004    9778 client.go:168] LocalClient.Create starting
	I0805 10:48:42.447138    9778 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/ca.pem
	I0805 10:48:42.447203    9778 main.go:141] libmachine: Decoding PEM data...
	I0805 10:48:42.447220    9778 main.go:141] libmachine: Parsing certificate...
	I0805 10:48:42.447277    9778 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/cert.pem
	I0805 10:48:42.447320    9778 main.go:141] libmachine: Decoding PEM data...
	I0805 10:48:42.447336    9778 main.go:141] libmachine: Parsing certificate...
	I0805 10:48:42.447853    9778 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19374-6507/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 10:48:42.609310    9778 main.go:141] libmachine: Creating SSH key...
	I0805 10:48:42.696333    9778 main.go:141] libmachine: Creating Disk image...
	I0805 10:48:42.696339    9778 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 10:48:42.696524    9778 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/kindnet-810000/disk.qcow2.raw /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/kindnet-810000/disk.qcow2
	I0805 10:48:42.705639    9778 main.go:141] libmachine: STDOUT: 
	I0805 10:48:42.705669    9778 main.go:141] libmachine: STDERR: 
	I0805 10:48:42.705713    9778 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/kindnet-810000/disk.qcow2 +20000M
	I0805 10:48:42.713665    9778 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 10:48:42.713677    9778 main.go:141] libmachine: STDERR: 
	I0805 10:48:42.713695    9778 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/kindnet-810000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/kindnet-810000/disk.qcow2
	I0805 10:48:42.713699    9778 main.go:141] libmachine: Starting QEMU VM...
	I0805 10:48:42.713710    9778 qemu.go:418] Using hvf for hardware acceleration
	I0805 10:48:42.713737    9778 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/kindnet-810000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/kindnet-810000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/kindnet-810000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:96:59:16:06:1f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/kindnet-810000/disk.qcow2
	I0805 10:48:42.715371    9778 main.go:141] libmachine: STDOUT: 
	I0805 10:48:42.715385    9778 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 10:48:42.715406    9778 client.go:171] duration metric: took 268.3915ms to LocalClient.Create
	I0805 10:48:44.717551    9778 start.go:128] duration metric: took 2.336485084s to createHost
	I0805 10:48:44.717606    9778 start.go:83] releasing machines lock for "kindnet-810000", held for 2.336992458s
	W0805 10:48:44.718023    9778 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-810000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-810000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 10:48:44.732678    9778 out.go:177] 
	W0805 10:48:44.737817    9778 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 10:48:44.737842    9778 out.go:239] * 
	* 
	W0805 10:48:44.740427    9778 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 10:48:44.748735    9778 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.80s)
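
Every start in this group trips over the same host-level precondition, so recovery means bringing the socket_vmnet daemon back up before rerunning anything. One way to start it in the foreground for debugging; this sketch follows the upstream socket_vmnet README rather than anything in this log, so the flag and gateway default should be verified against the installed version:

	# vmnet requires root; once running, the daemon listens on the unix socket
	# that socket_vmnet_client (and therefore the qemu2 driver) expects.
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet

Once the socket accepts connections again, rerunning a single failed suite (for example the auto-810000 start command above) is a quick way to confirm the fix before relaunching the whole job.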

TestNetworkPlugins/group/calico/Start (9.84s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-810000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-810000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.841164666s)

-- stdout --
	* [calico-810000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19374
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-810000" primary control-plane node in "calico-810000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-810000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0805 10:48:47.075626    9891 out.go:291] Setting OutFile to fd 1 ...
	I0805 10:48:47.075743    9891 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:48:47.075747    9891 out.go:304] Setting ErrFile to fd 2...
	I0805 10:48:47.075749    9891 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:48:47.075888    9891 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
	I0805 10:48:47.076895    9891 out.go:298] Setting JSON to false
	I0805 10:48:47.092794    9891 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6497,"bootTime":1722873630,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 10:48:47.092860    9891 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 10:48:47.099654    9891 out.go:177] * [calico-810000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 10:48:47.105661    9891 out.go:177]   - MINIKUBE_LOCATION=19374
	I0805 10:48:47.105723    9891 notify.go:220] Checking for updates...
	I0805 10:48:47.112612    9891 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	I0805 10:48:47.115599    9891 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 10:48:47.118628    9891 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 10:48:47.120209    9891 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	I0805 10:48:47.123610    9891 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 10:48:47.126901    9891 config.go:182] Loaded profile config "cert-expiration-440000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 10:48:47.126966    9891 config.go:182] Loaded profile config "multinode-022000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 10:48:47.127011    9891 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 10:48:47.131403    9891 out.go:177] * Using the qemu2 driver based on user configuration
	I0805 10:48:47.138659    9891 start.go:297] selected driver: qemu2
	I0805 10:48:47.138666    9891 start.go:901] validating driver "qemu2" against <nil>
	I0805 10:48:47.138675    9891 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 10:48:47.140862    9891 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 10:48:47.144651    9891 out.go:177] * Automatically selected the socket_vmnet network
	I0805 10:48:47.147665    9891 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 10:48:47.147680    9891 cni.go:84] Creating CNI manager for "calico"
	I0805 10:48:47.147688    9891 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0805 10:48:47.147718    9891 start.go:340] cluster config:
	{Name:calico-810000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:calico-810000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 10:48:47.151362    9891 iso.go:125] acquiring lock: {Name:mk3c732f608e41abb95f9d7b90e7a96dff21b06c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 10:48:47.157628    9891 out.go:177] * Starting "calico-810000" primary control-plane node in "calico-810000" cluster
	I0805 10:48:47.161629    9891 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 10:48:47.161645    9891 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0805 10:48:47.161658    9891 cache.go:56] Caching tarball of preloaded images
	I0805 10:48:47.161718    9891 preload.go:172] Found /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 10:48:47.161724    9891 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 10:48:47.161788    9891 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/calico-810000/config.json ...
	I0805 10:48:47.161799    9891 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/calico-810000/config.json: {Name:mk75ce7d0771837b0bd1b0439d509be8a50b4d92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 10:48:47.162016    9891 start.go:360] acquireMachinesLock for calico-810000: {Name:mkb00f076c5be0afa0d8c4c2e732c0f60f89f86b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 10:48:47.162048    9891 start.go:364] duration metric: took 27.042µs to acquireMachinesLock for "calico-810000"
	I0805 10:48:47.162059    9891 start.go:93] Provisioning new machine with config: &{Name:calico-810000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:calico-810000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 10:48:47.162087    9891 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 10:48:47.169631    9891 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0805 10:48:47.187183    9891 start.go:159] libmachine.API.Create for "calico-810000" (driver="qemu2")
	I0805 10:48:47.187214    9891 client.go:168] LocalClient.Create starting
	I0805 10:48:47.187279    9891 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/ca.pem
	I0805 10:48:47.187316    9891 main.go:141] libmachine: Decoding PEM data...
	I0805 10:48:47.187326    9891 main.go:141] libmachine: Parsing certificate...
	I0805 10:48:47.187365    9891 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/cert.pem
	I0805 10:48:47.187392    9891 main.go:141] libmachine: Decoding PEM data...
	I0805 10:48:47.187402    9891 main.go:141] libmachine: Parsing certificate...
	I0805 10:48:47.187785    9891 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19374-6507/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 10:48:47.338383    9891 main.go:141] libmachine: Creating SSH key...
	I0805 10:48:47.456377    9891 main.go:141] libmachine: Creating Disk image...
	I0805 10:48:47.456383    9891 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 10:48:47.456567    9891 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/calico-810000/disk.qcow2.raw /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/calico-810000/disk.qcow2
	I0805 10:48:47.466006    9891 main.go:141] libmachine: STDOUT: 
	I0805 10:48:47.466025    9891 main.go:141] libmachine: STDERR: 
	I0805 10:48:47.466076    9891 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/calico-810000/disk.qcow2 +20000M
	I0805 10:48:47.473935    9891 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 10:48:47.473950    9891 main.go:141] libmachine: STDERR: 
	I0805 10:48:47.473973    9891 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/calico-810000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/calico-810000/disk.qcow2
	I0805 10:48:47.473978    9891 main.go:141] libmachine: Starting QEMU VM...
	I0805 10:48:47.473995    9891 qemu.go:418] Using hvf for hardware acceleration
	I0805 10:48:47.474023    9891 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/calico-810000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/calico-810000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/calico-810000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:6f:b6:ed:e6:06 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/calico-810000/disk.qcow2
	I0805 10:48:47.475602    9891 main.go:141] libmachine: STDOUT: 
	I0805 10:48:47.475618    9891 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 10:48:47.475642    9891 client.go:171] duration metric: took 288.428375ms to LocalClient.Create
	I0805 10:48:49.477819    9891 start.go:128] duration metric: took 2.315740916s to createHost
	I0805 10:48:49.477880    9891 start.go:83] releasing machines lock for "calico-810000", held for 2.315852834s
	W0805 10:48:49.477990    9891 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 10:48:49.489049    9891 out.go:177] * Deleting "calico-810000" in qemu2 ...
	W0805 10:48:49.520282    9891 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 10:48:49.520306    9891 start.go:729] Will try again in 5 seconds ...
	I0805 10:48:54.522436    9891 start.go:360] acquireMachinesLock for calico-810000: {Name:mkb00f076c5be0afa0d8c4c2e732c0f60f89f86b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 10:48:54.523009    9891 start.go:364] duration metric: took 483.584µs to acquireMachinesLock for "calico-810000"
	I0805 10:48:54.523146    9891 start.go:93] Provisioning new machine with config: &{Name:calico-810000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:calico-810000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 10:48:54.523473    9891 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 10:48:54.540974    9891 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0805 10:48:54.592461    9891 start.go:159] libmachine.API.Create for "calico-810000" (driver="qemu2")
	I0805 10:48:54.592511    9891 client.go:168] LocalClient.Create starting
	I0805 10:48:54.592620    9891 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/ca.pem
	I0805 10:48:54.592686    9891 main.go:141] libmachine: Decoding PEM data...
	I0805 10:48:54.592704    9891 main.go:141] libmachine: Parsing certificate...
	I0805 10:48:54.592760    9891 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/cert.pem
	I0805 10:48:54.592803    9891 main.go:141] libmachine: Decoding PEM data...
	I0805 10:48:54.592817    9891 main.go:141] libmachine: Parsing certificate...
	I0805 10:48:54.593334    9891 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19374-6507/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 10:48:54.756270    9891 main.go:141] libmachine: Creating SSH key...
	I0805 10:48:54.821804    9891 main.go:141] libmachine: Creating Disk image...
	I0805 10:48:54.821809    9891 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 10:48:54.821995    9891 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/calico-810000/disk.qcow2.raw /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/calico-810000/disk.qcow2
	I0805 10:48:54.831340    9891 main.go:141] libmachine: STDOUT: 
	I0805 10:48:54.831362    9891 main.go:141] libmachine: STDERR: 
	I0805 10:48:54.831411    9891 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/calico-810000/disk.qcow2 +20000M
	I0805 10:48:54.839182    9891 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 10:48:54.839198    9891 main.go:141] libmachine: STDERR: 
	I0805 10:48:54.839210    9891 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/calico-810000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/calico-810000/disk.qcow2
	I0805 10:48:54.839224    9891 main.go:141] libmachine: Starting QEMU VM...
	I0805 10:48:54.839236    9891 qemu.go:418] Using hvf for hardware acceleration
	I0805 10:48:54.839262    9891 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/calico-810000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/calico-810000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/calico-810000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:f9:6c:c2:c2:b8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/calico-810000/disk.qcow2
	I0805 10:48:54.840749    9891 main.go:141] libmachine: STDOUT: 
	I0805 10:48:54.840764    9891 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 10:48:54.840778    9891 client.go:171] duration metric: took 248.266041ms to LocalClient.Create
	I0805 10:48:56.842979    9891 start.go:128] duration metric: took 2.319488667s to createHost
	I0805 10:48:56.843058    9891 start.go:83] releasing machines lock for "calico-810000", held for 2.320052792s
	W0805 10:48:56.843482    9891 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-810000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 10:48:56.854248    9891 out.go:177] 
	W0805 10:48:56.863114    9891 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 10:48:56.863139    9891 out.go:239] * 
	W0805 10:48:56.865947    9891 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 10:48:56.875155    9891 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.84s)
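The cluster config dumped above records where minikube expects the daemon and its client: SocketVMnetPath:/var/run/socket_vmnet and SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client. If the daemon on this host actually listens elsewhere, the start command can be pointed at it directly rather than reinstalling; a sketch, where the socket path shown is a hypothetical alternative location:

	out/minikube-darwin-arm64 start -p calico-810000 --driver=qemu2 \
	  --network=socket_vmnet \
	  --socket-vmnet-path=/opt/socket_vmnet/var/run/socket_vmnet \
	  --socket-vmnet-client-path=/opt/socket_vmnet/bin/socket_vmnet_client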

TestNetworkPlugins/group/custom-flannel/Start (9.83s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-810000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-810000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.828701s)

-- stdout --
	* [custom-flannel-810000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19374
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-810000" primary control-plane node in "custom-flannel-810000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-810000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0805 10:48:59.322167   10015 out.go:291] Setting OutFile to fd 1 ...
	I0805 10:48:59.322310   10015 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:48:59.322313   10015 out.go:304] Setting ErrFile to fd 2...
	I0805 10:48:59.322316   10015 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:48:59.322421   10015 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
	I0805 10:48:59.323458   10015 out.go:298] Setting JSON to false
	I0805 10:48:59.339577   10015 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6509,"bootTime":1722873630,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 10:48:59.339665   10015 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 10:48:59.346152   10015 out.go:177] * [custom-flannel-810000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 10:48:59.352079   10015 out.go:177]   - MINIKUBE_LOCATION=19374
	I0805 10:48:59.352138   10015 notify.go:220] Checking for updates...
	I0805 10:48:59.359211   10015 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	I0805 10:48:59.362144   10015 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 10:48:59.365173   10015 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 10:48:59.368203   10015 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	I0805 10:48:59.369620   10015 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 10:48:59.373559   10015 config.go:182] Loaded profile config "cert-expiration-440000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 10:48:59.373633   10015 config.go:182] Loaded profile config "multinode-022000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 10:48:59.373678   10015 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 10:48:59.378147   10015 out.go:177] * Using the qemu2 driver based on user configuration
	I0805 10:48:59.383107   10015 start.go:297] selected driver: qemu2
	I0805 10:48:59.383113   10015 start.go:901] validating driver "qemu2" against <nil>
	I0805 10:48:59.383119   10015 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 10:48:59.385367   10015 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 10:48:59.389173   10015 out.go:177] * Automatically selected the socket_vmnet network
	I0805 10:48:59.392266   10015 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 10:48:59.392279   10015 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0805 10:48:59.392290   10015 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0805 10:48:59.392319   10015 start.go:340] cluster config:
	{Name:custom-flannel-810000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-810000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 10:48:59.396042   10015 iso.go:125] acquiring lock: {Name:mk3c732f608e41abb95f9d7b90e7a96dff21b06c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 10:48:59.404140   10015 out.go:177] * Starting "custom-flannel-810000" primary control-plane node in "custom-flannel-810000" cluster
	I0805 10:48:59.408157   10015 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 10:48:59.408175   10015 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0805 10:48:59.408188   10015 cache.go:56] Caching tarball of preloaded images
	I0805 10:48:59.408289   10015 preload.go:172] Found /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 10:48:59.408295   10015 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 10:48:59.408357   10015 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/custom-flannel-810000/config.json ...
	I0805 10:48:59.408368   10015 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/custom-flannel-810000/config.json: {Name:mkd99b30b81a28ee8378a8fe30d0bd6db6d48951 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 10:48:59.408600   10015 start.go:360] acquireMachinesLock for custom-flannel-810000: {Name:mkb00f076c5be0afa0d8c4c2e732c0f60f89f86b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 10:48:59.408637   10015 start.go:364] duration metric: took 29.667µs to acquireMachinesLock for "custom-flannel-810000"
	I0805 10:48:59.408648   10015 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-810000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-810000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 10:48:59.408678   10015 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 10:48:59.417086   10015 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0805 10:48:59.434592   10015 start.go:159] libmachine.API.Create for "custom-flannel-810000" (driver="qemu2")
	I0805 10:48:59.434622   10015 client.go:168] LocalClient.Create starting
	I0805 10:48:59.434689   10015 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/ca.pem
	I0805 10:48:59.434721   10015 main.go:141] libmachine: Decoding PEM data...
	I0805 10:48:59.434730   10015 main.go:141] libmachine: Parsing certificate...
	I0805 10:48:59.434766   10015 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/cert.pem
	I0805 10:48:59.434789   10015 main.go:141] libmachine: Decoding PEM data...
	I0805 10:48:59.434799   10015 main.go:141] libmachine: Parsing certificate...
	I0805 10:48:59.435278   10015 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19374-6507/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 10:48:59.587346   10015 main.go:141] libmachine: Creating SSH key...
	I0805 10:48:59.643657   10015 main.go:141] libmachine: Creating Disk image...
	I0805 10:48:59.643662   10015 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 10:48:59.643854   10015 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/custom-flannel-810000/disk.qcow2.raw /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/custom-flannel-810000/disk.qcow2
	I0805 10:48:59.652951   10015 main.go:141] libmachine: STDOUT: 
	I0805 10:48:59.652970   10015 main.go:141] libmachine: STDERR: 
	I0805 10:48:59.653021   10015 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/custom-flannel-810000/disk.qcow2 +20000M
	I0805 10:48:59.660917   10015 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 10:48:59.660935   10015 main.go:141] libmachine: STDERR: 
	I0805 10:48:59.660951   10015 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/custom-flannel-810000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/custom-flannel-810000/disk.qcow2
	I0805 10:48:59.660956   10015 main.go:141] libmachine: Starting QEMU VM...
	I0805 10:48:59.660968   10015 qemu.go:418] Using hvf for hardware acceleration
	I0805 10:48:59.660994   10015 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/custom-flannel-810000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/custom-flannel-810000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/custom-flannel-810000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:5e:11:ad:95:25 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/custom-flannel-810000/disk.qcow2
	I0805 10:48:59.662612   10015 main.go:141] libmachine: STDOUT: 
	I0805 10:48:59.662627   10015 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 10:48:59.662642   10015 client.go:171] duration metric: took 228.019ms to LocalClient.Create
	I0805 10:49:01.664836   10015 start.go:128] duration metric: took 2.25615725s to createHost
	I0805 10:49:01.664919   10015 start.go:83] releasing machines lock for "custom-flannel-810000", held for 2.256301666s
	W0805 10:49:01.665099   10015 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 10:49:01.676166   10015 out.go:177] * Deleting "custom-flannel-810000" in qemu2 ...
	W0805 10:49:01.705947   10015 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 10:49:01.705974   10015 start.go:729] Will try again in 5 seconds ...
	I0805 10:49:06.708169   10015 start.go:360] acquireMachinesLock for custom-flannel-810000: {Name:mkb00f076c5be0afa0d8c4c2e732c0f60f89f86b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 10:49:06.708699   10015 start.go:364] duration metric: took 365.167µs to acquireMachinesLock for "custom-flannel-810000"
	I0805 10:49:06.708841   10015 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-810000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-810000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 10:49:06.709072   10015 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 10:49:06.718772   10015 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0805 10:49:06.770619   10015 start.go:159] libmachine.API.Create for "custom-flannel-810000" (driver="qemu2")
	I0805 10:49:06.770677   10015 client.go:168] LocalClient.Create starting
	I0805 10:49:06.770798   10015 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/ca.pem
	I0805 10:49:06.770853   10015 main.go:141] libmachine: Decoding PEM data...
	I0805 10:49:06.770869   10015 main.go:141] libmachine: Parsing certificate...
	I0805 10:49:06.770937   10015 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/cert.pem
	I0805 10:49:06.770983   10015 main.go:141] libmachine: Decoding PEM data...
	I0805 10:49:06.770994   10015 main.go:141] libmachine: Parsing certificate...
	I0805 10:49:06.771541   10015 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19374-6507/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 10:49:06.933971   10015 main.go:141] libmachine: Creating SSH key...
	I0805 10:49:07.057771   10015 main.go:141] libmachine: Creating Disk image...
	I0805 10:49:07.057780   10015 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 10:49:07.057973   10015 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/custom-flannel-810000/disk.qcow2.raw /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/custom-flannel-810000/disk.qcow2
	I0805 10:49:07.067255   10015 main.go:141] libmachine: STDOUT: 
	I0805 10:49:07.067276   10015 main.go:141] libmachine: STDERR: 
	I0805 10:49:07.067347   10015 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/custom-flannel-810000/disk.qcow2 +20000M
	I0805 10:49:07.075253   10015 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 10:49:07.075269   10015 main.go:141] libmachine: STDERR: 
	I0805 10:49:07.075280   10015 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/custom-flannel-810000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/custom-flannel-810000/disk.qcow2
	I0805 10:49:07.075286   10015 main.go:141] libmachine: Starting QEMU VM...
	I0805 10:49:07.075297   10015 qemu.go:418] Using hvf for hardware acceleration
	I0805 10:49:07.075326   10015 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/custom-flannel-810000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/custom-flannel-810000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/custom-flannel-810000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:2e:99:ec:0d:8d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/custom-flannel-810000/disk.qcow2
	I0805 10:49:07.076919   10015 main.go:141] libmachine: STDOUT: 
	I0805 10:49:07.076936   10015 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 10:49:07.076946   10015 client.go:171] duration metric: took 306.267125ms to LocalClient.Create
	I0805 10:49:09.079131   10015 start.go:128] duration metric: took 2.370044083s to createHost
	I0805 10:49:09.079212   10015 start.go:83] releasing machines lock for "custom-flannel-810000", held for 2.370516417s
	W0805 10:49:09.079611   10015 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-810000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 10:49:09.094343   10015 out.go:177] 
	W0805 10:49:09.097474   10015 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 10:49:09.097529   10015 out.go:239] * 
	W0805 10:49:09.099876   10015 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 10:49:09.108363   10015 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.83s)
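Unlike the kindnet and calico groups, which select built-in CNI presets, this group passes a manifest path (--cni=testdata/kube-flannel.yaml); the flag accepts either a preset name or a path to a CNI YAML. Once the socket_vmnet daemon is healthy, the same scenario can be reproduced outside the harness; a sketch with a hypothetical profile name and manifest path:

	minikube start -p flannel-repro --driver=qemu2 --cni=./kube-flannel.yaml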

TestNetworkPlugins/group/false/Start (9.96s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-810000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-810000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.956874292s)

-- stdout --
	* [false-810000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19374
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-810000" primary control-plane node in "false-810000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-810000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0805 10:49:11.508259   10138 out.go:291] Setting OutFile to fd 1 ...
	I0805 10:49:11.508396   10138 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:49:11.508399   10138 out.go:304] Setting ErrFile to fd 2...
	I0805 10:49:11.508402   10138 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:49:11.508518   10138 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
	I0805 10:49:11.509555   10138 out.go:298] Setting JSON to false
	I0805 10:49:11.525599   10138 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6521,"bootTime":1722873630,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 10:49:11.525673   10138 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 10:49:11.538673   10138 out.go:177] * [false-810000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 10:49:11.545755   10138 out.go:177]   - MINIKUBE_LOCATION=19374
	I0805 10:49:11.545791   10138 notify.go:220] Checking for updates...
	I0805 10:49:11.553708   10138 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	I0805 10:49:11.556783   10138 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 10:49:11.559737   10138 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 10:49:11.562748   10138 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	I0805 10:49:11.565781   10138 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 10:49:11.569075   10138 config.go:182] Loaded profile config "cert-expiration-440000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 10:49:11.569161   10138 config.go:182] Loaded profile config "multinode-022000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 10:49:11.569214   10138 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 10:49:11.572676   10138 out.go:177] * Using the qemu2 driver based on user configuration
	I0805 10:49:11.579781   10138 start.go:297] selected driver: qemu2
	I0805 10:49:11.579788   10138 start.go:901] validating driver "qemu2" against <nil>
	I0805 10:49:11.579796   10138 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 10:49:11.582026   10138 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 10:49:11.584760   10138 out.go:177] * Automatically selected the socket_vmnet network
	I0805 10:49:11.587818   10138 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 10:49:11.587843   10138 cni.go:84] Creating CNI manager for "false"
	I0805 10:49:11.587883   10138 start.go:340] cluster config:
	{Name:false-810000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:false-810000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 10:49:11.591756   10138 iso.go:125] acquiring lock: {Name:mk3c732f608e41abb95f9d7b90e7a96dff21b06c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 10:49:11.597762   10138 out.go:177] * Starting "false-810000" primary control-plane node in "false-810000" cluster
	I0805 10:49:11.601736   10138 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 10:49:11.601751   10138 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0805 10:49:11.601763   10138 cache.go:56] Caching tarball of preloaded images
	I0805 10:49:11.601823   10138 preload.go:172] Found /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 10:49:11.601830   10138 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 10:49:11.601900   10138 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/false-810000/config.json ...
	I0805 10:49:11.601916   10138 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/false-810000/config.json: {Name:mk2c3eae629fa9ffe53457c1a83be99d771d0a50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 10:49:11.602347   10138 start.go:360] acquireMachinesLock for false-810000: {Name:mkb00f076c5be0afa0d8c4c2e732c0f60f89f86b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 10:49:11.602382   10138 start.go:364] duration metric: took 29.208µs to acquireMachinesLock for "false-810000"
	I0805 10:49:11.602408   10138 start.go:93] Provisioning new machine with config: &{Name:false-810000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:false-810000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 10:49:11.602436   10138 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 10:49:11.610714   10138 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0805 10:49:11.628937   10138 start.go:159] libmachine.API.Create for "false-810000" (driver="qemu2")
	I0805 10:49:11.628959   10138 client.go:168] LocalClient.Create starting
	I0805 10:49:11.629033   10138 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/ca.pem
	I0805 10:49:11.629063   10138 main.go:141] libmachine: Decoding PEM data...
	I0805 10:49:11.629075   10138 main.go:141] libmachine: Parsing certificate...
	I0805 10:49:11.629110   10138 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/cert.pem
	I0805 10:49:11.629134   10138 main.go:141] libmachine: Decoding PEM data...
	I0805 10:49:11.629143   10138 main.go:141] libmachine: Parsing certificate...
	I0805 10:49:11.629599   10138 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19374-6507/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 10:49:11.780287   10138 main.go:141] libmachine: Creating SSH key...
	I0805 10:49:11.965600   10138 main.go:141] libmachine: Creating Disk image...
	I0805 10:49:11.965607   10138 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 10:49:11.965793   10138 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/false-810000/disk.qcow2.raw /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/false-810000/disk.qcow2
	I0805 10:49:11.975229   10138 main.go:141] libmachine: STDOUT: 
	I0805 10:49:11.975250   10138 main.go:141] libmachine: STDERR: 
	I0805 10:49:11.975300   10138 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/false-810000/disk.qcow2 +20000M
	I0805 10:49:11.983084   10138 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 10:49:11.983097   10138 main.go:141] libmachine: STDERR: 
	I0805 10:49:11.983107   10138 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/false-810000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/false-810000/disk.qcow2
	I0805 10:49:11.983111   10138 main.go:141] libmachine: Starting QEMU VM...
	I0805 10:49:11.983134   10138 qemu.go:418] Using hvf for hardware acceleration
	I0805 10:49:11.983162   10138 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/false-810000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/false-810000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/false-810000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:56:2e:8c:27:90 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/false-810000/disk.qcow2
	I0805 10:49:11.984751   10138 main.go:141] libmachine: STDOUT: 
	I0805 10:49:11.984764   10138 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 10:49:11.984780   10138 client.go:171] duration metric: took 355.822ms to LocalClient.Create
	I0805 10:49:13.986932   10138 start.go:128] duration metric: took 2.38450475s to createHost
	I0805 10:49:13.986991   10138 start.go:83] releasing machines lock for "false-810000", held for 2.3846315s
	W0805 10:49:13.987069   10138 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 10:49:14.003109   10138 out.go:177] * Deleting "false-810000" in qemu2 ...
	W0805 10:49:14.029972   10138 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 10:49:14.029999   10138 start.go:729] Will try again in 5 seconds ...
	I0805 10:49:19.032183   10138 start.go:360] acquireMachinesLock for false-810000: {Name:mkb00f076c5be0afa0d8c4c2e732c0f60f89f86b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 10:49:19.032595   10138 start.go:364] duration metric: took 314.583µs to acquireMachinesLock for "false-810000"
	I0805 10:49:19.032706   10138 start.go:93] Provisioning new machine with config: &{Name:false-810000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:false-810000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 10:49:19.033039   10138 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 10:49:19.049701   10138 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0805 10:49:19.101466   10138 start.go:159] libmachine.API.Create for "false-810000" (driver="qemu2")
	I0805 10:49:19.101511   10138 client.go:168] LocalClient.Create starting
	I0805 10:49:19.101627   10138 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/ca.pem
	I0805 10:49:19.101696   10138 main.go:141] libmachine: Decoding PEM data...
	I0805 10:49:19.101716   10138 main.go:141] libmachine: Parsing certificate...
	I0805 10:49:19.101773   10138 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/cert.pem
	I0805 10:49:19.101816   10138 main.go:141] libmachine: Decoding PEM data...
	I0805 10:49:19.101828   10138 main.go:141] libmachine: Parsing certificate...
	I0805 10:49:19.102314   10138 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19374-6507/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 10:49:19.264578   10138 main.go:141] libmachine: Creating SSH key...
	I0805 10:49:19.373692   10138 main.go:141] libmachine: Creating Disk image...
	I0805 10:49:19.373697   10138 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 10:49:19.373882   10138 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/false-810000/disk.qcow2.raw /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/false-810000/disk.qcow2
	I0805 10:49:19.383127   10138 main.go:141] libmachine: STDOUT: 
	I0805 10:49:19.383147   10138 main.go:141] libmachine: STDERR: 
	I0805 10:49:19.383193   10138 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/false-810000/disk.qcow2 +20000M
	I0805 10:49:19.390984   10138 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 10:49:19.391000   10138 main.go:141] libmachine: STDERR: 
	I0805 10:49:19.391011   10138 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/false-810000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/false-810000/disk.qcow2
	I0805 10:49:19.391014   10138 main.go:141] libmachine: Starting QEMU VM...
	I0805 10:49:19.391025   10138 qemu.go:418] Using hvf for hardware acceleration
	I0805 10:49:19.391067   10138 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/false-810000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/false-810000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/false-810000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:be:00:4d:2d:8a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/false-810000/disk.qcow2
	I0805 10:49:19.392672   10138 main.go:141] libmachine: STDOUT: 
	I0805 10:49:19.392688   10138 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 10:49:19.392701   10138 client.go:171] duration metric: took 291.186834ms to LocalClient.Create
	I0805 10:49:21.394855   10138 start.go:128] duration metric: took 2.361810375s to createHost
	I0805 10:49:21.394909   10138 start.go:83] releasing machines lock for "false-810000", held for 2.362320709s
	W0805 10:49:21.395342   10138 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-810000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-810000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 10:49:21.404941   10138 out.go:177] 
	W0805 10:49:21.411997   10138 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 10:49:21.412040   10138 out.go:239] * 
	* 
	W0805 10:49:21.414644   10138 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 10:49:21.422930   10138 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.96s)
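
Editor's note: every failure in this group has the same root cause: nothing was listening on /var/run/socket_vmnet, so socket_vmnet_client exited with "Connection refused" before QEMU could boot, on both the initial attempt and the 5-second retry. As a quick precondition check, the standalone Go sketch below simply dials the unix socket the way socket_vmnet_client would. It is illustrative only, not part of minikube or this test suite; the socket path is taken from the failing command lines above.

    // probe_socket_vmnet.go - hypothetical standalone probe, not minikube code.
    // It checks whether the socket_vmnet daemon is accepting connections on the
    // unix socket that the failing qemu invocations above try to use.
    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        const sockPath = "/var/run/socket_vmnet" // path from the logs above
        conn, err := net.DialTimeout("unix", sockPath, 2*time.Second)
        if err != nil {
            // This is the state the CI host was in: connection refused.
            fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
            os.Exit(1)
        }
        defer conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }

On the agent above, this probe would exit non-zero until the socket_vmnet service is (re)started with sufficient privileges, which is the precondition these tests implicitly assume.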

TestNetworkPlugins/group/enable-default-cni/Start (9.92s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-810000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-810000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.914756958s)

-- stdout --
	* [enable-default-cni-810000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19374
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-810000" primary control-plane node in "enable-default-cni-810000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-810000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0805 10:49:23.560273   10248 out.go:291] Setting OutFile to fd 1 ...
	I0805 10:49:23.560430   10248 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:49:23.560433   10248 out.go:304] Setting ErrFile to fd 2...
	I0805 10:49:23.560439   10248 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:49:23.560559   10248 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
	I0805 10:49:23.561615   10248 out.go:298] Setting JSON to false
	I0805 10:49:23.577607   10248 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6533,"bootTime":1722873630,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 10:49:23.577681   10248 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 10:49:23.583095   10248 out.go:177] * [enable-default-cni-810000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 10:49:23.590075   10248 out.go:177]   - MINIKUBE_LOCATION=19374
	I0805 10:49:23.590169   10248 notify.go:220] Checking for updates...
	I0805 10:49:23.597111   10248 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	I0805 10:49:23.600092   10248 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 10:49:23.603064   10248 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 10:49:23.606065   10248 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	I0805 10:49:23.614127   10248 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 10:49:23.617291   10248 config.go:182] Loaded profile config "cert-expiration-440000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 10:49:23.617365   10248 config.go:182] Loaded profile config "multinode-022000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 10:49:23.617409   10248 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 10:49:23.622106   10248 out.go:177] * Using the qemu2 driver based on user configuration
	I0805 10:49:23.629039   10248 start.go:297] selected driver: qemu2
	I0805 10:49:23.629044   10248 start.go:901] validating driver "qemu2" against <nil>
	I0805 10:49:23.629049   10248 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 10:49:23.631303   10248 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 10:49:23.634065   10248 out.go:177] * Automatically selected the socket_vmnet network
	E0805 10:49:23.637202   10248 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0805 10:49:23.637213   10248 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 10:49:23.637235   10248 cni.go:84] Creating CNI manager for "bridge"
	I0805 10:49:23.637245   10248 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0805 10:49:23.637276   10248 start.go:340] cluster config:
	{Name:enable-default-cni-810000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-810000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 10:49:23.640979   10248 iso.go:125] acquiring lock: {Name:mk3c732f608e41abb95f9d7b90e7a96dff21b06c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 10:49:23.649083   10248 out.go:177] * Starting "enable-default-cni-810000" primary control-plane node in "enable-default-cni-810000" cluster
	I0805 10:49:23.652944   10248 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 10:49:23.652963   10248 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0805 10:49:23.652977   10248 cache.go:56] Caching tarball of preloaded images
	I0805 10:49:23.653048   10248 preload.go:172] Found /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 10:49:23.653054   10248 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 10:49:23.653134   10248 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/enable-default-cni-810000/config.json ...
	I0805 10:49:23.653149   10248 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/enable-default-cni-810000/config.json: {Name:mk3663aea422dd46d9ac54f7fb5c2092bda76eef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 10:49:23.653591   10248 start.go:360] acquireMachinesLock for enable-default-cni-810000: {Name:mkb00f076c5be0afa0d8c4c2e732c0f60f89f86b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 10:49:23.653629   10248 start.go:364] duration metric: took 30.5µs to acquireMachinesLock for "enable-default-cni-810000"
	I0805 10:49:23.653640   10248 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-810000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-810000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 10:49:23.653677   10248 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 10:49:23.658093   10248 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0805 10:49:23.675741   10248 start.go:159] libmachine.API.Create for "enable-default-cni-810000" (driver="qemu2")
	I0805 10:49:23.675770   10248 client.go:168] LocalClient.Create starting
	I0805 10:49:23.675829   10248 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/ca.pem
	I0805 10:49:23.675861   10248 main.go:141] libmachine: Decoding PEM data...
	I0805 10:49:23.675871   10248 main.go:141] libmachine: Parsing certificate...
	I0805 10:49:23.675916   10248 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/cert.pem
	I0805 10:49:23.675939   10248 main.go:141] libmachine: Decoding PEM data...
	I0805 10:49:23.675948   10248 main.go:141] libmachine: Parsing certificate...
	I0805 10:49:23.676341   10248 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19374-6507/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 10:49:23.828396   10248 main.go:141] libmachine: Creating SSH key...
	I0805 10:49:23.904017   10248 main.go:141] libmachine: Creating Disk image...
	I0805 10:49:23.904022   10248 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 10:49:23.904205   10248 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/enable-default-cni-810000/disk.qcow2.raw /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/enable-default-cni-810000/disk.qcow2
	I0805 10:49:23.913193   10248 main.go:141] libmachine: STDOUT: 
	I0805 10:49:23.913206   10248 main.go:141] libmachine: STDERR: 
	I0805 10:49:23.913255   10248 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/enable-default-cni-810000/disk.qcow2 +20000M
	I0805 10:49:23.920962   10248 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 10:49:23.920975   10248 main.go:141] libmachine: STDERR: 
	I0805 10:49:23.920989   10248 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/enable-default-cni-810000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/enable-default-cni-810000/disk.qcow2
	I0805 10:49:23.920997   10248 main.go:141] libmachine: Starting QEMU VM...
	I0805 10:49:23.921009   10248 qemu.go:418] Using hvf for hardware acceleration
	I0805 10:49:23.921036   10248 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/enable-default-cni-810000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/enable-default-cni-810000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/enable-default-cni-810000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:df:84:72:ce:09 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/enable-default-cni-810000/disk.qcow2
	I0805 10:49:23.922596   10248 main.go:141] libmachine: STDOUT: 
	I0805 10:49:23.922609   10248 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 10:49:23.922628   10248 client.go:171] duration metric: took 246.853583ms to LocalClient.Create
	I0805 10:49:25.924778   10248 start.go:128] duration metric: took 2.271108625s to createHost
	I0805 10:49:25.924840   10248 start.go:83] releasing machines lock for "enable-default-cni-810000", held for 2.271231708s
	W0805 10:49:25.924983   10248 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 10:49:25.936045   10248 out.go:177] * Deleting "enable-default-cni-810000" in qemu2 ...
	W0805 10:49:25.967139   10248 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 10:49:25.967168   10248 start.go:729] Will try again in 5 seconds ...
	I0805 10:49:30.969476   10248 start.go:360] acquireMachinesLock for enable-default-cni-810000: {Name:mkb00f076c5be0afa0d8c4c2e732c0f60f89f86b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 10:49:30.969927   10248 start.go:364] duration metric: took 335.25µs to acquireMachinesLock for "enable-default-cni-810000"
	I0805 10:49:30.970036   10248 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-810000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-810000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 10:49:30.970343   10248 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 10:49:30.976004   10248 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0805 10:49:31.027140   10248 start.go:159] libmachine.API.Create for "enable-default-cni-810000" (driver="qemu2")
	I0805 10:49:31.027202   10248 client.go:168] LocalClient.Create starting
	I0805 10:49:31.027317   10248 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/ca.pem
	I0805 10:49:31.027381   10248 main.go:141] libmachine: Decoding PEM data...
	I0805 10:49:31.027398   10248 main.go:141] libmachine: Parsing certificate...
	I0805 10:49:31.027454   10248 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/cert.pem
	I0805 10:49:31.027508   10248 main.go:141] libmachine: Decoding PEM data...
	I0805 10:49:31.027518   10248 main.go:141] libmachine: Parsing certificate...
	I0805 10:49:31.028587   10248 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19374-6507/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 10:49:31.199398   10248 main.go:141] libmachine: Creating SSH key...
	I0805 10:49:31.380283   10248 main.go:141] libmachine: Creating Disk image...
	I0805 10:49:31.380289   10248 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 10:49:31.380484   10248 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/enable-default-cni-810000/disk.qcow2.raw /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/enable-default-cni-810000/disk.qcow2
	I0805 10:49:31.390108   10248 main.go:141] libmachine: STDOUT: 
	I0805 10:49:31.390139   10248 main.go:141] libmachine: STDERR: 
	I0805 10:49:31.390186   10248 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/enable-default-cni-810000/disk.qcow2 +20000M
	I0805 10:49:31.398008   10248 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 10:49:31.398022   10248 main.go:141] libmachine: STDERR: 
	I0805 10:49:31.398040   10248 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/enable-default-cni-810000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/enable-default-cni-810000/disk.qcow2
	I0805 10:49:31.398045   10248 main.go:141] libmachine: Starting QEMU VM...
	I0805 10:49:31.398055   10248 qemu.go:418] Using hvf for hardware acceleration
	I0805 10:49:31.398085   10248 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/enable-default-cni-810000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/enable-default-cni-810000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/enable-default-cni-810000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:61:96:12:57:69 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/enable-default-cni-810000/disk.qcow2
	I0805 10:49:31.399604   10248 main.go:141] libmachine: STDOUT: 
	I0805 10:49:31.399620   10248 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 10:49:31.399638   10248 client.go:171] duration metric: took 372.435292ms to LocalClient.Create
	I0805 10:49:33.401783   10248 start.go:128] duration metric: took 2.431444833s to createHost
	I0805 10:49:33.401842   10248 start.go:83] releasing machines lock for "enable-default-cni-810000", held for 2.431924375s
	W0805 10:49:33.402200   10248 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-810000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-810000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 10:49:33.415868   10248 out.go:177] 
	W0805 10:49:33.420127   10248 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 10:49:33.420153   10248 out.go:239] * 
	* 
	W0805 10:49:33.427079   10248 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 10:49:33.433887   10248 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.92s)
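
Editor's note: besides the shared socket_vmnet failure, this run also records the flag translation at 10:49:23.637202 ("Found deprecated --enable-default-cni flag, setting --cni=bridge"), which is why the cluster config above shows NetworkPlugin:cni and CNI:bridge. A minimal sketch of that translation, written from the logged behavior rather than quoting minikube's actual source (the helper name is made up):

    package main

    import "fmt"

    // resolveCNI mirrors the behavior shown in the log: the deprecated
    // --enable-default-cni flag is rewritten to the bridge CNI.
    // Hypothetical helper for illustration only, not minikube's real API.
    func resolveCNI(enableDefaultCNI bool, cni string) string {
        if enableDefaultCNI && cni == "" {
            return "bridge"
        }
        return cni
    }

    func main() {
        fmt.Println(resolveCNI(true, ""))         // bridge
        fmt.Println(resolveCNI(false, "flannel")) // flannel
    }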

TestNetworkPlugins/group/flannel/Start (9.82s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-810000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-810000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.816173958s)

-- stdout --
	* [flannel-810000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19374
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-810000" primary control-plane node in "flannel-810000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-810000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0805 10:49:35.554753   10357 out.go:291] Setting OutFile to fd 1 ...
	I0805 10:49:35.554874   10357 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:49:35.554878   10357 out.go:304] Setting ErrFile to fd 2...
	I0805 10:49:35.554880   10357 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:49:35.555009   10357 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
	I0805 10:49:35.556085   10357 out.go:298] Setting JSON to false
	I0805 10:49:35.572327   10357 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6545,"bootTime":1722873630,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 10:49:35.572387   10357 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 10:49:35.579194   10357 out.go:177] * [flannel-810000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 10:49:35.586103   10357 out.go:177]   - MINIKUBE_LOCATION=19374
	I0805 10:49:35.586145   10357 notify.go:220] Checking for updates...
	I0805 10:49:35.593086   10357 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	I0805 10:49:35.596099   10357 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 10:49:35.599102   10357 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 10:49:35.602148   10357 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	I0805 10:49:35.605017   10357 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 10:49:35.608463   10357 config.go:182] Loaded profile config "cert-expiration-440000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 10:49:35.608531   10357 config.go:182] Loaded profile config "multinode-022000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 10:49:35.608571   10357 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 10:49:35.613093   10357 out.go:177] * Using the qemu2 driver based on user configuration
	I0805 10:49:35.620105   10357 start.go:297] selected driver: qemu2
	I0805 10:49:35.620111   10357 start.go:901] validating driver "qemu2" against <nil>
	I0805 10:49:35.620120   10357 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 10:49:35.622363   10357 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 10:49:35.626164   10357 out.go:177] * Automatically selected the socket_vmnet network
	I0805 10:49:35.629166   10357 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 10:49:35.629196   10357 cni.go:84] Creating CNI manager for "flannel"
	I0805 10:49:35.629204   10357 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0805 10:49:35.629238   10357 start.go:340] cluster config:
	{Name:flannel-810000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:flannel-810000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 10:49:35.632770   10357 iso.go:125] acquiring lock: {Name:mk3c732f608e41abb95f9d7b90e7a96dff21b06c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 10:49:35.640118   10357 out.go:177] * Starting "flannel-810000" primary control-plane node in "flannel-810000" cluster
	I0805 10:49:35.644131   10357 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 10:49:35.644148   10357 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0805 10:49:35.644162   10357 cache.go:56] Caching tarball of preloaded images
	I0805 10:49:35.644241   10357 preload.go:172] Found /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 10:49:35.644247   10357 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 10:49:35.644312   10357 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/flannel-810000/config.json ...
	I0805 10:49:35.644323   10357 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/flannel-810000/config.json: {Name:mk8fc20b13143c70f77cf237da474efae4464dff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 10:49:35.644557   10357 start.go:360] acquireMachinesLock for flannel-810000: {Name:mkb00f076c5be0afa0d8c4c2e732c0f60f89f86b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 10:49:35.644600   10357 start.go:364] duration metric: took 37.125µs to acquireMachinesLock for "flannel-810000"
	I0805 10:49:35.644611   10357 start.go:93] Provisioning new machine with config: &{Name:flannel-810000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:flannel-810000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 10:49:35.644643   10357 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 10:49:35.653089   10357 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0805 10:49:35.670971   10357 start.go:159] libmachine.API.Create for "flannel-810000" (driver="qemu2")
	I0805 10:49:35.671002   10357 client.go:168] LocalClient.Create starting
	I0805 10:49:35.671074   10357 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/ca.pem
	I0805 10:49:35.671107   10357 main.go:141] libmachine: Decoding PEM data...
	I0805 10:49:35.671117   10357 main.go:141] libmachine: Parsing certificate...
	I0805 10:49:35.671153   10357 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/cert.pem
	I0805 10:49:35.671177   10357 main.go:141] libmachine: Decoding PEM data...
	I0805 10:49:35.671184   10357 main.go:141] libmachine: Parsing certificate...
	I0805 10:49:35.671617   10357 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19374-6507/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 10:49:35.823519   10357 main.go:141] libmachine: Creating SSH key...
	I0805 10:49:35.897188   10357 main.go:141] libmachine: Creating Disk image...
	I0805 10:49:35.897199   10357 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 10:49:35.897377   10357 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/flannel-810000/disk.qcow2.raw /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/flannel-810000/disk.qcow2
	I0805 10:49:35.906353   10357 main.go:141] libmachine: STDOUT: 
	I0805 10:49:35.906371   10357 main.go:141] libmachine: STDERR: 
	I0805 10:49:35.906416   10357 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/flannel-810000/disk.qcow2 +20000M
	I0805 10:49:35.914171   10357 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 10:49:35.914194   10357 main.go:141] libmachine: STDERR: 
	I0805 10:49:35.914207   10357 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/flannel-810000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/flannel-810000/disk.qcow2
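	[Editor's note] Disk provisioning is a two-step qemu-img sequence, exactly as the two "executing:" lines above show: convert the raw seed image to qcow2, then grow it by the requested 20000 MB. A minimal Go sketch of the same two invocations; the short relative paths are hypothetical stand-ins for the Jenkins workspace paths in the log.

```go
// Reproduces the logged qemu-img convert + resize sequence via os/exec.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func createDisk(raw, qcow2, grow string) error {
	// Step 1: qemu-img convert -f raw -O qcow2 <raw> <qcow2>
	if out, err := exec.Command("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow2).CombinedOutput(); err != nil {
		return fmt.Errorf("convert: %v: %s", err, out)
	}
	// Step 2: qemu-img resize <qcow2> +20000M (sparse grow; "Image resized." on stdout)
	if out, err := exec.Command("qemu-img", "resize", qcow2, grow).CombinedOutput(); err != nil {
		return fmt.Errorf("resize: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := createDisk("disk.qcow2.raw", "disk.qcow2", "+20000M"); err != nil {
		log.Fatal(err)
	}
}
```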
	I0805 10:49:35.914212   10357 main.go:141] libmachine: Starting QEMU VM...
	I0805 10:49:35.914220   10357 qemu.go:418] Using hvf for hardware acceleration
	I0805 10:49:35.914246   10357 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/flannel-810000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/flannel-810000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/flannel-810000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:a2:c5:a6:be:c6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/flannel-810000/disk.qcow2
	I0805 10:49:35.915832   10357 main.go:141] libmachine: STDOUT: 
	I0805 10:49:35.915844   10357 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 10:49:35.915868   10357 client.go:171] duration metric: took 244.864791ms to LocalClient.Create
	I0805 10:49:37.918055   10357 start.go:128] duration metric: took 2.273414s to createHost
	I0805 10:49:37.918128   10357 start.go:83] releasing machines lock for "flannel-810000", held for 2.273547542s
	W0805 10:49:37.918198   10357 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 10:49:37.933417   10357 out.go:177] * Deleting "flannel-810000" in qemu2 ...
	W0805 10:49:37.962619   10357 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 10:49:37.962645   10357 start.go:729] Will try again in 5 seconds ...
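	[Editor's note] Every start attempt in this run dies at the same point: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), which points at the socket_vmnet daemon not running on the agent rather than at any per-test problem. A small sketch that reproduces the same reachability check from Go, using only the socket path taken from the log:

```go
// Probe the socket_vmnet unix socket the same way a client connection would.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // path taken from the log above
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet not reachable:", err) // matches the logged failure
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}
```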
	I0805 10:49:42.964887   10357 start.go:360] acquireMachinesLock for flannel-810000: {Name:mkb00f076c5be0afa0d8c4c2e732c0f60f89f86b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 10:49:42.965462   10357 start.go:364] duration metric: took 461.708µs to acquireMachinesLock for "flannel-810000"
	I0805 10:49:42.965617   10357 start.go:93] Provisioning new machine with config: &{Name:flannel-810000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.3 ClusterName:flannel-810000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 10:49:42.965906   10357 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 10:49:42.971689   10357 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0805 10:49:43.020996   10357 start.go:159] libmachine.API.Create for "flannel-810000" (driver="qemu2")
	I0805 10:49:43.021049   10357 client.go:168] LocalClient.Create starting
	I0805 10:49:43.021172   10357 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/ca.pem
	I0805 10:49:43.021232   10357 main.go:141] libmachine: Decoding PEM data...
	I0805 10:49:43.021246   10357 main.go:141] libmachine: Parsing certificate...
	I0805 10:49:43.021329   10357 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/cert.pem
	I0805 10:49:43.021375   10357 main.go:141] libmachine: Decoding PEM data...
	I0805 10:49:43.021398   10357 main.go:141] libmachine: Parsing certificate...
	I0805 10:49:43.021920   10357 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19374-6507/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 10:49:43.183724   10357 main.go:141] libmachine: Creating SSH key...
	I0805 10:49:43.274478   10357 main.go:141] libmachine: Creating Disk image...
	I0805 10:49:43.274486   10357 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 10:49:43.274676   10357 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/flannel-810000/disk.qcow2.raw /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/flannel-810000/disk.qcow2
	I0805 10:49:43.284076   10357 main.go:141] libmachine: STDOUT: 
	I0805 10:49:43.284096   10357 main.go:141] libmachine: STDERR: 
	I0805 10:49:43.284136   10357 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/flannel-810000/disk.qcow2 +20000M
	I0805 10:49:43.291933   10357 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 10:49:43.291946   10357 main.go:141] libmachine: STDERR: 
	I0805 10:49:43.291957   10357 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/flannel-810000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/flannel-810000/disk.qcow2
	I0805 10:49:43.291963   10357 main.go:141] libmachine: Starting QEMU VM...
	I0805 10:49:43.291977   10357 qemu.go:418] Using hvf for hardware acceleration
	I0805 10:49:43.292013   10357 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/flannel-810000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/flannel-810000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/flannel-810000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:eb:cb:16:eb:4f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/flannel-810000/disk.qcow2
	I0805 10:49:43.293647   10357 main.go:141] libmachine: STDOUT: 
	I0805 10:49:43.293661   10357 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 10:49:43.293673   10357 client.go:171] duration metric: took 272.6215ms to LocalClient.Create
	I0805 10:49:45.295828   10357 start.go:128] duration metric: took 2.32991125s to createHost
	I0805 10:49:45.295879   10357 start.go:83] releasing machines lock for "flannel-810000", held for 2.330424s
	W0805 10:49:45.296231   10357 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-810000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-810000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 10:49:45.309859   10357 out.go:177] 
	W0805 10:49:45.314048   10357 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 10:49:45.314077   10357 out.go:239] * 
	* 
	W0805 10:49:45.316622   10357 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 10:49:45.327893   10357 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
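[Editor's note] The harness treats any non-zero exit from the start command as a failed start; exit status 80 appears to be minikube's generic guest-error code, consistent with the GUEST_PROVISION exit reason logged above. A sketch of how a harness can capture that status in Go via *exec.ExitError, assuming the binary path and flags shown in the (dbg) Run line:

```go
// Run the logged start command and surface its exit status.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "start", "-p", "flannel-810000",
		"--memory=3072", "--cni=flannel", "--driver=qemu2")
	err := cmd.Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		fmt.Println("exit status", ee.ExitCode()) // 80 in this run
	}
}
```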
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.82s)

TestNetworkPlugins/group/bridge/Start (9.77s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-810000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-810000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.765977667s)

-- stdout --
	* [bridge-810000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19374
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-810000" primary control-plane node in "bridge-810000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-810000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	I0805 10:49:47.647739   10478 out.go:291] Setting OutFile to fd 1 ...
	I0805 10:49:47.647876   10478 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:49:47.647879   10478 out.go:304] Setting ErrFile to fd 2...
	I0805 10:49:47.647882   10478 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:49:47.648007   10478 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
	I0805 10:49:47.649078   10478 out.go:298] Setting JSON to false
	I0805 10:49:47.665341   10478 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6557,"bootTime":1722873630,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 10:49:47.665456   10478 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 10:49:47.672132   10478 out.go:177] * [bridge-810000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 10:49:47.679087   10478 out.go:177]   - MINIKUBE_LOCATION=19374
	I0805 10:49:47.679147   10478 notify.go:220] Checking for updates...
	I0805 10:49:47.686067   10478 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	I0805 10:49:47.689082   10478 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 10:49:47.692070   10478 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 10:49:47.695120   10478 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	I0805 10:49:47.698057   10478 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 10:49:47.701451   10478 config.go:182] Loaded profile config "cert-expiration-440000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 10:49:47.701523   10478 config.go:182] Loaded profile config "multinode-022000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 10:49:47.701578   10478 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 10:49:47.706064   10478 out.go:177] * Using the qemu2 driver based on user configuration
	I0805 10:49:47.713015   10478 start.go:297] selected driver: qemu2
	I0805 10:49:47.713021   10478 start.go:901] validating driver "qemu2" against <nil>
	I0805 10:49:47.713029   10478 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 10:49:47.715465   10478 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 10:49:47.719083   10478 out.go:177] * Automatically selected the socket_vmnet network
	I0805 10:49:47.722119   10478 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 10:49:47.722139   10478 cni.go:84] Creating CNI manager for "bridge"
	I0805 10:49:47.722142   10478 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0805 10:49:47.722174   10478 start.go:340] cluster config:
	{Name:bridge-810000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:bridge-810000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_
vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 10:49:47.725914   10478 iso.go:125] acquiring lock: {Name:mk3c732f608e41abb95f9d7b90e7a96dff21b06c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 10:49:47.734244   10478 out.go:177] * Starting "bridge-810000" primary control-plane node in "bridge-810000" cluster
	I0805 10:49:47.738053   10478 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 10:49:47.738072   10478 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0805 10:49:47.738089   10478 cache.go:56] Caching tarball of preloaded images
	I0805 10:49:47.738157   10478 preload.go:172] Found /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 10:49:47.738169   10478 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 10:49:47.738237   10478 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/bridge-810000/config.json ...
	I0805 10:49:47.738249   10478 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/bridge-810000/config.json: {Name:mkfe28922f09b480a6a700bba19d804ea9efc4a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 10:49:47.738484   10478 start.go:360] acquireMachinesLock for bridge-810000: {Name:mkb00f076c5be0afa0d8c4c2e732c0f60f89f86b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 10:49:47.738520   10478 start.go:364] duration metric: took 29.792µs to acquireMachinesLock for "bridge-810000"
	I0805 10:49:47.738534   10478 start.go:93] Provisioning new machine with config: &{Name:bridge-810000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.3 ClusterName:bridge-810000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 10:49:47.738562   10478 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 10:49:47.747093   10478 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0805 10:49:47.765938   10478 start.go:159] libmachine.API.Create for "bridge-810000" (driver="qemu2")
	I0805 10:49:47.765967   10478 client.go:168] LocalClient.Create starting
	I0805 10:49:47.766030   10478 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/ca.pem
	I0805 10:49:47.766062   10478 main.go:141] libmachine: Decoding PEM data...
	I0805 10:49:47.766078   10478 main.go:141] libmachine: Parsing certificate...
	I0805 10:49:47.766116   10478 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/cert.pem
	I0805 10:49:47.766140   10478 main.go:141] libmachine: Decoding PEM data...
	I0805 10:49:47.766152   10478 main.go:141] libmachine: Parsing certificate...
	I0805 10:49:47.766489   10478 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19374-6507/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 10:49:47.919174   10478 main.go:141] libmachine: Creating SSH key...
	I0805 10:49:47.966054   10478 main.go:141] libmachine: Creating Disk image...
	I0805 10:49:47.966059   10478 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 10:49:47.966241   10478 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/bridge-810000/disk.qcow2.raw /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/bridge-810000/disk.qcow2
	I0805 10:49:47.975218   10478 main.go:141] libmachine: STDOUT: 
	I0805 10:49:47.975236   10478 main.go:141] libmachine: STDERR: 
	I0805 10:49:47.975281   10478 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/bridge-810000/disk.qcow2 +20000M
	I0805 10:49:47.983061   10478 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 10:49:47.983078   10478 main.go:141] libmachine: STDERR: 
	I0805 10:49:47.983091   10478 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/bridge-810000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/bridge-810000/disk.qcow2
	I0805 10:49:47.983096   10478 main.go:141] libmachine: Starting QEMU VM...
	I0805 10:49:47.983106   10478 qemu.go:418] Using hvf for hardware acceleration
	I0805 10:49:47.983139   10478 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/bridge-810000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/bridge-810000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/bridge-810000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:13:95:9f:6f:d9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/bridge-810000/disk.qcow2
	I0805 10:49:47.984738   10478 main.go:141] libmachine: STDOUT: 
	I0805 10:49:47.984753   10478 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 10:49:47.984773   10478 client.go:171] duration metric: took 218.804625ms to LocalClient.Create
	I0805 10:49:49.986962   10478 start.go:128] duration metric: took 2.248394125s to createHost
	I0805 10:49:49.987060   10478 start.go:83] releasing machines lock for "bridge-810000", held for 2.248559375s
	W0805 10:49:49.987118   10478 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 10:49:50.002616   10478 out.go:177] * Deleting "bridge-810000" in qemu2 ...
	W0805 10:49:50.028498   10478 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 10:49:50.028532   10478 start.go:729] Will try again in 5 seconds ...
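	[Editor's note] The driver retries host creation once after a fixed 5-second pause, as the "Will try again in 5 seconds" lines show throughout this report. A generic Go sketch of that retry shape; the attempt count and delay are read off this log, not taken from minikube source:

```go
// Fixed-delay retry around a create function, as observed in the log.
package main

import (
	"errors"
	"fmt"
	"time"
)

func startWithRetry(create func() error, attempts int, pause time.Duration) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = create(); err == nil {
			return nil
		}
		if i < attempts-1 {
			fmt.Printf("! StartHost failed, but will try again: %v\n", err)
			time.Sleep(pause)
		}
	}
	return err // both attempts failed, cf. GUEST_PROVISION above
}

func main() {
	err := startWithRetry(func() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}, 2, 5*time.Second)
	fmt.Println("final:", err)
}
```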
	I0805 10:49:55.030822   10478 start.go:360] acquireMachinesLock for bridge-810000: {Name:mkb00f076c5be0afa0d8c4c2e732c0f60f89f86b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 10:49:55.031280   10478 start.go:364] duration metric: took 348.292µs to acquireMachinesLock for "bridge-810000"
	I0805 10:49:55.031476   10478 start.go:93] Provisioning new machine with config: &{Name:bridge-810000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.3 ClusterName:bridge-810000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 10:49:55.031760   10478 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 10:49:55.048503   10478 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0805 10:49:55.098786   10478 start.go:159] libmachine.API.Create for "bridge-810000" (driver="qemu2")
	I0805 10:49:55.098844   10478 client.go:168] LocalClient.Create starting
	I0805 10:49:55.098955   10478 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/ca.pem
	I0805 10:49:55.099015   10478 main.go:141] libmachine: Decoding PEM data...
	I0805 10:49:55.099032   10478 main.go:141] libmachine: Parsing certificate...
	I0805 10:49:55.099097   10478 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/cert.pem
	I0805 10:49:55.099154   10478 main.go:141] libmachine: Decoding PEM data...
	I0805 10:49:55.099164   10478 main.go:141] libmachine: Parsing certificate...
	I0805 10:49:55.099760   10478 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19374-6507/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 10:49:55.261889   10478 main.go:141] libmachine: Creating SSH key...
	I0805 10:49:55.311481   10478 main.go:141] libmachine: Creating Disk image...
	I0805 10:49:55.311486   10478 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 10:49:55.311681   10478 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/bridge-810000/disk.qcow2.raw /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/bridge-810000/disk.qcow2
	I0805 10:49:55.320810   10478 main.go:141] libmachine: STDOUT: 
	I0805 10:49:55.320825   10478 main.go:141] libmachine: STDERR: 
	I0805 10:49:55.320888   10478 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/bridge-810000/disk.qcow2 +20000M
	I0805 10:49:55.335927   10478 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 10:49:55.335947   10478 main.go:141] libmachine: STDERR: 
	I0805 10:49:55.335958   10478 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/bridge-810000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/bridge-810000/disk.qcow2
	I0805 10:49:55.335965   10478 main.go:141] libmachine: Starting QEMU VM...
	I0805 10:49:55.335971   10478 qemu.go:418] Using hvf for hardware acceleration
	I0805 10:49:55.336008   10478 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/bridge-810000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/bridge-810000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/bridge-810000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:ec:cd:25:e0:1c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/bridge-810000/disk.qcow2
	I0805 10:49:55.337560   10478 main.go:141] libmachine: STDOUT: 
	I0805 10:49:55.337575   10478 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 10:49:55.337587   10478 client.go:171] duration metric: took 238.740625ms to LocalClient.Create
	I0805 10:49:57.339741   10478 start.go:128] duration metric: took 2.307957958s to createHost
	I0805 10:49:57.339855   10478 start.go:83] releasing machines lock for "bridge-810000", held for 2.308579417s
	W0805 10:49:57.340238   10478 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-810000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-810000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 10:49:57.351859   10478 out.go:177] 
	W0805 10:49:57.359057   10478 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 10:49:57.359107   10478 out.go:239] * 
	* 
	W0805 10:49:57.361500   10478 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 10:49:57.371906   10478 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.77s)

TestNetworkPlugins/group/kubenet/Start (10.02s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-810000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-810000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (10.016786458s)

-- stdout --
	* [kubenet-810000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19374
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-810000" primary control-plane node in "kubenet-810000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-810000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	I0805 10:49:59.584868   10591 out.go:291] Setting OutFile to fd 1 ...
	I0805 10:49:59.584992   10591 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:49:59.584995   10591 out.go:304] Setting ErrFile to fd 2...
	I0805 10:49:59.584997   10591 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:49:59.585134   10591 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
	I0805 10:49:59.586171   10591 out.go:298] Setting JSON to false
	I0805 10:49:59.602262   10591 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6569,"bootTime":1722873630,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 10:49:59.602330   10591 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 10:49:59.608860   10591 out.go:177] * [kubenet-810000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 10:49:59.615854   10591 out.go:177]   - MINIKUBE_LOCATION=19374
	I0805 10:49:59.615932   10591 notify.go:220] Checking for updates...
	I0805 10:49:59.622755   10591 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	I0805 10:49:59.625777   10591 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 10:49:59.628865   10591 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 10:49:59.631792   10591 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	I0805 10:49:59.634891   10591 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 10:49:59.638175   10591 config.go:182] Loaded profile config "cert-expiration-440000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 10:49:59.638244   10591 config.go:182] Loaded profile config "multinode-022000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 10:49:59.638297   10591 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 10:49:59.642789   10591 out.go:177] * Using the qemu2 driver based on user configuration
	I0805 10:49:59.649857   10591 start.go:297] selected driver: qemu2
	I0805 10:49:59.649865   10591 start.go:901] validating driver "qemu2" against <nil>
	I0805 10:49:59.649873   10591 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 10:49:59.652285   10591 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 10:49:59.654751   10591 out.go:177] * Automatically selected the socket_vmnet network
	I0805 10:49:59.657875   10591 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 10:49:59.657892   10591 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0805 10:49:59.657919   10591 start.go:340] cluster config:
	{Name:kubenet-810000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kubenet-810000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_
vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 10:49:59.661602   10591 iso.go:125] acquiring lock: {Name:mk3c732f608e41abb95f9d7b90e7a96dff21b06c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 10:49:59.669764   10591 out.go:177] * Starting "kubenet-810000" primary control-plane node in "kubenet-810000" cluster
	I0805 10:49:59.673813   10591 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 10:49:59.673832   10591 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0805 10:49:59.673845   10591 cache.go:56] Caching tarball of preloaded images
	I0805 10:49:59.673916   10591 preload.go:172] Found /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 10:49:59.673921   10591 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 10:49:59.673985   10591 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/kubenet-810000/config.json ...
	I0805 10:49:59.673996   10591 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/kubenet-810000/config.json: {Name:mkc8386c968a13bbd40f1e6e97807b0293b54d31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 10:49:59.674234   10591 start.go:360] acquireMachinesLock for kubenet-810000: {Name:mkb00f076c5be0afa0d8c4c2e732c0f60f89f86b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 10:49:59.674269   10591 start.go:364] duration metric: took 29.166µs to acquireMachinesLock for "kubenet-810000"
	I0805 10:49:59.674281   10591 start.go:93] Provisioning new machine with config: &{Name:kubenet-810000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.3 ClusterName:kubenet-810000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 10:49:59.674312   10591 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 10:49:59.682773   10591 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0805 10:49:59.701319   10591 start.go:159] libmachine.API.Create for "kubenet-810000" (driver="qemu2")
	I0805 10:49:59.701352   10591 client.go:168] LocalClient.Create starting
	I0805 10:49:59.701426   10591 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/ca.pem
	I0805 10:49:59.701460   10591 main.go:141] libmachine: Decoding PEM data...
	I0805 10:49:59.701476   10591 main.go:141] libmachine: Parsing certificate...
	I0805 10:49:59.701515   10591 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/cert.pem
	I0805 10:49:59.701540   10591 main.go:141] libmachine: Decoding PEM data...
	I0805 10:49:59.701548   10591 main.go:141] libmachine: Parsing certificate...
	I0805 10:49:59.701891   10591 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19374-6507/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 10:49:59.854822   10591 main.go:141] libmachine: Creating SSH key...
	I0805 10:50:00.072866   10591 main.go:141] libmachine: Creating Disk image...
	I0805 10:50:00.072882   10591 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 10:50:00.073134   10591 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/kubenet-810000/disk.qcow2.raw /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/kubenet-810000/disk.qcow2
	I0805 10:50:00.082803   10591 main.go:141] libmachine: STDOUT: 
	I0805 10:50:00.082822   10591 main.go:141] libmachine: STDERR: 
	I0805 10:50:00.082892   10591 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/kubenet-810000/disk.qcow2 +20000M
	I0805 10:50:00.090725   10591 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 10:50:00.090749   10591 main.go:141] libmachine: STDERR: 
	I0805 10:50:00.090762   10591 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/kubenet-810000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/kubenet-810000/disk.qcow2
	I0805 10:50:00.090771   10591 main.go:141] libmachine: Starting QEMU VM...
	I0805 10:50:00.090784   10591 qemu.go:418] Using hvf for hardware acceleration
	I0805 10:50:00.090818   10591 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/kubenet-810000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/kubenet-810000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/kubenet-810000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:5d:12:1c:6a:b6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/kubenet-810000/disk.qcow2
	I0805 10:50:00.092418   10591 main.go:141] libmachine: STDOUT: 
	I0805 10:50:00.092433   10591 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 10:50:00.092451   10591 client.go:171] duration metric: took 391.099209ms to LocalClient.Create
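	[Editor's note] Each launch line above also opens a QMP control socket ("-qmp unix:...,server,nowait" pointing at the profile's monitor file), which the driver later uses to manage the VM. A sketch of the QMP handshake from Go; the short socket path is a hypothetical stand-in for a .minikube/machines/<profile>/monitor path like the ones in this log, and the handshake (greeting, then qmp_capabilities) is standard QMP:

```go
// Minimal QMP handshake against a QEMU monitor unix socket.
package main

import (
	"bufio"
	"fmt"
	"log"
	"net"
)

func main() {
	conn, err := net.Dial("unix", "monitor") // e.g. .minikube/machines/<profile>/monitor
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	r := bufio.NewReader(conn)
	greeting, _ := r.ReadString('\n') // QMP servers greet with a JSON banner
	fmt.Print(greeting)
	// QMP requires negotiating capabilities before any other command.
	fmt.Fprintln(conn, `{"execute": "qmp_capabilities"}`)
	resp, _ := r.ReadString('\n')
	fmt.Print(resp) // expect {"return": {}}
}
```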
	I0805 10:50:02.094605   10591 start.go:128] duration metric: took 2.42030225s to createHost
	I0805 10:50:02.094654   10591 start.go:83] releasing machines lock for "kubenet-810000", held for 2.420405916s
	W0805 10:50:02.094721   10591 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 10:50:02.105805   10591 out.go:177] * Deleting "kubenet-810000" in qemu2 ...
	W0805 10:50:02.138701   10591 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 10:50:02.138724   10591 start.go:729] Will try again in 5 seconds ...
	I0805 10:50:07.140837   10591 start.go:360] acquireMachinesLock for kubenet-810000: {Name:mkb00f076c5be0afa0d8c4c2e732c0f60f89f86b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 10:50:07.141262   10591 start.go:364] duration metric: took 327.375µs to acquireMachinesLock for "kubenet-810000"
	I0805 10:50:07.141377   10591 start.go:93] Provisioning new machine with config: &{Name:kubenet-810000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kubenet-810000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 10:50:07.141678   10591 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 10:50:07.157386   10591 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0805 10:50:07.206876   10591 start.go:159] libmachine.API.Create for "kubenet-810000" (driver="qemu2")
	I0805 10:50:07.206920   10591 client.go:168] LocalClient.Create starting
	I0805 10:50:07.207020   10591 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/ca.pem
	I0805 10:50:07.207080   10591 main.go:141] libmachine: Decoding PEM data...
	I0805 10:50:07.207125   10591 main.go:141] libmachine: Parsing certificate...
	I0805 10:50:07.207186   10591 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/cert.pem
	I0805 10:50:07.207229   10591 main.go:141] libmachine: Decoding PEM data...
	I0805 10:50:07.207243   10591 main.go:141] libmachine: Parsing certificate...
	I0805 10:50:07.207930   10591 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19374-6507/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 10:50:07.370512   10591 main.go:141] libmachine: Creating SSH key...
	I0805 10:50:07.506714   10591 main.go:141] libmachine: Creating Disk image...
	I0805 10:50:07.506720   10591 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 10:50:07.506946   10591 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/kubenet-810000/disk.qcow2.raw /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/kubenet-810000/disk.qcow2
	I0805 10:50:07.516497   10591 main.go:141] libmachine: STDOUT: 
	I0805 10:50:07.516518   10591 main.go:141] libmachine: STDERR: 
	I0805 10:50:07.516585   10591 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/kubenet-810000/disk.qcow2 +20000M
	I0805 10:50:07.524281   10591 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 10:50:07.524310   10591 main.go:141] libmachine: STDERR: 
	I0805 10:50:07.524324   10591 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/kubenet-810000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/kubenet-810000/disk.qcow2
	I0805 10:50:07.524328   10591 main.go:141] libmachine: Starting QEMU VM...
	I0805 10:50:07.524335   10591 qemu.go:418] Using hvf for hardware acceleration
	I0805 10:50:07.524368   10591 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/kubenet-810000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/kubenet-810000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/kubenet-810000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:c8:37:1e:c4:49 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/kubenet-810000/disk.qcow2
	I0805 10:50:07.525972   10591 main.go:141] libmachine: STDOUT: 
	I0805 10:50:07.525988   10591 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 10:50:07.525998   10591 client.go:171] duration metric: took 319.07825ms to LocalClient.Create
	I0805 10:50:09.528147   10591 start.go:128] duration metric: took 2.386462292s to createHost
	I0805 10:50:09.528191   10591 start.go:83] releasing machines lock for "kubenet-810000", held for 2.386939542s
	W0805 10:50:09.528547   10591 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-810000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-810000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 10:50:09.545251   10591 out.go:177] 
	W0805 10:50:09.549289   10591 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 10:50:09.549313   10591 out.go:239] * 
	* 
	W0805 10:50:09.552084   10591 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 10:50:09.560210   10591 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (10.02s)
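
The test never reaches Kubernetes: each start attempt dies when socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet. "Connection refused" on a unix-domain socket means nothing is accepting on that path, i.e. the socket_vmnet daemon is not running on the build host. A minimal Go sketch (illustrative only, not part of the test suite) that reproduces the same error class against the SocketVMnetPath from the profile config:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // SocketVMnetPath from the cluster config logged above. With no
        // daemon listening, Dial fails with ECONNREFUSED, the same error
        // socket_vmnet_client surfaces in the STDERR lines.
        conn, err := net.Dial("unix", "/var/run/socket_vmnet")
        if err != nil {
            fmt.Println("dial failed:", err) // e.g. "connect: connection refused"
            return
        }
        defer conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }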

TestStartStop/group/old-k8s-version/serial/FirstStart (9.98s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-935000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-935000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.907677375s)

-- stdout --
	* [old-k8s-version-935000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19374
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-935000" primary control-plane node in "old-k8s-version-935000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-935000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0805 10:50:11.756368   10702 out.go:291] Setting OutFile to fd 1 ...
	I0805 10:50:11.756494   10702 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:50:11.756497   10702 out.go:304] Setting ErrFile to fd 2...
	I0805 10:50:11.756500   10702 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:50:11.756618   10702 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
	I0805 10:50:11.757646   10702 out.go:298] Setting JSON to false
	I0805 10:50:11.773658   10702 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6581,"bootTime":1722873630,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 10:50:11.773729   10702 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 10:50:11.780088   10702 out.go:177] * [old-k8s-version-935000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 10:50:11.786011   10702 out.go:177]   - MINIKUBE_LOCATION=19374
	I0805 10:50:11.786052   10702 notify.go:220] Checking for updates...
	I0805 10:50:11.792924   10702 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	I0805 10:50:11.795970   10702 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 10:50:11.799020   10702 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 10:50:11.800520   10702 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	I0805 10:50:11.804020   10702 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 10:50:11.807367   10702 config.go:182] Loaded profile config "cert-expiration-440000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 10:50:11.807435   10702 config.go:182] Loaded profile config "multinode-022000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 10:50:11.807487   10702 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 10:50:11.811841   10702 out.go:177] * Using the qemu2 driver based on user configuration
	I0805 10:50:11.818978   10702 start.go:297] selected driver: qemu2
	I0805 10:50:11.818986   10702 start.go:901] validating driver "qemu2" against <nil>
	I0805 10:50:11.818993   10702 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 10:50:11.821163   10702 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 10:50:11.824103   10702 out.go:177] * Automatically selected the socket_vmnet network
	I0805 10:50:11.827050   10702 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 10:50:11.827063   10702 cni.go:84] Creating CNI manager for ""
	I0805 10:50:11.827070   10702 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0805 10:50:11.827099   10702 start.go:340] cluster config:
	{Name:old-k8s-version-935000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-935000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 10:50:11.830732   10702 iso.go:125] acquiring lock: {Name:mk3c732f608e41abb95f9d7b90e7a96dff21b06c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 10:50:11.836924   10702 out.go:177] * Starting "old-k8s-version-935000" primary control-plane node in "old-k8s-version-935000" cluster
	I0805 10:50:11.840981   10702 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0805 10:50:11.840997   10702 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0805 10:50:11.841008   10702 cache.go:56] Caching tarball of preloaded images
	I0805 10:50:11.841072   10702 preload.go:172] Found /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 10:50:11.841077   10702 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0805 10:50:11.841126   10702 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/old-k8s-version-935000/config.json ...
	I0805 10:50:11.841137   10702 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/old-k8s-version-935000/config.json: {Name:mk522bd7917e29b31221e227f1332ced8edac542 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 10:50:11.841544   10702 start.go:360] acquireMachinesLock for old-k8s-version-935000: {Name:mkb00f076c5be0afa0d8c4c2e732c0f60f89f86b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 10:50:11.841578   10702 start.go:364] duration metric: took 25.25µs to acquireMachinesLock for "old-k8s-version-935000"
	I0805 10:50:11.841588   10702 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-935000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-935000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 10:50:11.841622   10702 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 10:50:11.849008   10702 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0805 10:50:11.866347   10702 start.go:159] libmachine.API.Create for "old-k8s-version-935000" (driver="qemu2")
	I0805 10:50:11.866381   10702 client.go:168] LocalClient.Create starting
	I0805 10:50:11.866444   10702 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/ca.pem
	I0805 10:50:11.866471   10702 main.go:141] libmachine: Decoding PEM data...
	I0805 10:50:11.866478   10702 main.go:141] libmachine: Parsing certificate...
	I0805 10:50:11.866515   10702 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/cert.pem
	I0805 10:50:11.866538   10702 main.go:141] libmachine: Decoding PEM data...
	I0805 10:50:11.866545   10702 main.go:141] libmachine: Parsing certificate...
	I0805 10:50:11.866954   10702 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19374-6507/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 10:50:12.018110   10702 main.go:141] libmachine: Creating SSH key...
	I0805 10:50:12.174823   10702 main.go:141] libmachine: Creating Disk image...
	I0805 10:50:12.174830   10702 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 10:50:12.175030   10702 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/old-k8s-version-935000/disk.qcow2.raw /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/old-k8s-version-935000/disk.qcow2
	I0805 10:50:12.184614   10702 main.go:141] libmachine: STDOUT: 
	I0805 10:50:12.184632   10702 main.go:141] libmachine: STDERR: 
	I0805 10:50:12.184675   10702 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/old-k8s-version-935000/disk.qcow2 +20000M
	I0805 10:50:12.192419   10702 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 10:50:12.192433   10702 main.go:141] libmachine: STDERR: 
	I0805 10:50:12.192449   10702 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/old-k8s-version-935000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/old-k8s-version-935000/disk.qcow2
	I0805 10:50:12.192452   10702 main.go:141] libmachine: Starting QEMU VM...
	I0805 10:50:12.192469   10702 qemu.go:418] Using hvf for hardware acceleration
	I0805 10:50:12.192508   10702 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/old-k8s-version-935000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/old-k8s-version-935000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/old-k8s-version-935000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:b8:29:bf:ce:e4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/old-k8s-version-935000/disk.qcow2
	I0805 10:50:12.194173   10702 main.go:141] libmachine: STDOUT: 
	I0805 10:50:12.194187   10702 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 10:50:12.194204   10702 client.go:171] duration metric: took 327.821875ms to LocalClient.Create
	I0805 10:50:14.196388   10702 start.go:128] duration metric: took 2.354775959s to createHost
	I0805 10:50:14.196452   10702 start.go:83] releasing machines lock for "old-k8s-version-935000", held for 2.354894792s
	W0805 10:50:14.196562   10702 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 10:50:14.211458   10702 out.go:177] * Deleting "old-k8s-version-935000" in qemu2 ...
	W0805 10:50:14.242998   10702 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 10:50:14.243028   10702 start.go:729] Will try again in 5 seconds ...
	I0805 10:50:19.245198   10702 start.go:360] acquireMachinesLock for old-k8s-version-935000: {Name:mkb00f076c5be0afa0d8c4c2e732c0f60f89f86b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 10:50:19.245775   10702 start.go:364] duration metric: took 460.917µs to acquireMachinesLock for "old-k8s-version-935000"
	I0805 10:50:19.245930   10702 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-935000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-935000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 10:50:19.246365   10702 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 10:50:19.262854   10702 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0805 10:50:19.312421   10702 start.go:159] libmachine.API.Create for "old-k8s-version-935000" (driver="qemu2")
	I0805 10:50:19.312468   10702 client.go:168] LocalClient.Create starting
	I0805 10:50:19.312579   10702 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/ca.pem
	I0805 10:50:19.312649   10702 main.go:141] libmachine: Decoding PEM data...
	I0805 10:50:19.312666   10702 main.go:141] libmachine: Parsing certificate...
	I0805 10:50:19.312717   10702 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/cert.pem
	I0805 10:50:19.312764   10702 main.go:141] libmachine: Decoding PEM data...
	I0805 10:50:19.312774   10702 main.go:141] libmachine: Parsing certificate...
	I0805 10:50:19.313298   10702 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19374-6507/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 10:50:19.475536   10702 main.go:141] libmachine: Creating SSH key...
	I0805 10:50:19.572854   10702 main.go:141] libmachine: Creating Disk image...
	I0805 10:50:19.572859   10702 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 10:50:19.573045   10702 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/old-k8s-version-935000/disk.qcow2.raw /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/old-k8s-version-935000/disk.qcow2
	I0805 10:50:19.582399   10702 main.go:141] libmachine: STDOUT: 
	I0805 10:50:19.582415   10702 main.go:141] libmachine: STDERR: 
	I0805 10:50:19.582457   10702 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/old-k8s-version-935000/disk.qcow2 +20000M
	I0805 10:50:19.590168   10702 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 10:50:19.590183   10702 main.go:141] libmachine: STDERR: 
	I0805 10:50:19.590195   10702 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/old-k8s-version-935000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/old-k8s-version-935000/disk.qcow2
	I0805 10:50:19.590200   10702 main.go:141] libmachine: Starting QEMU VM...
	I0805 10:50:19.590210   10702 qemu.go:418] Using hvf for hardware acceleration
	I0805 10:50:19.590256   10702 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/old-k8s-version-935000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/old-k8s-version-935000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/old-k8s-version-935000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:17:54:47:e0:76 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/old-k8s-version-935000/disk.qcow2
	I0805 10:50:19.591844   10702 main.go:141] libmachine: STDOUT: 
	I0805 10:50:19.591858   10702 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 10:50:19.591870   10702 client.go:171] duration metric: took 279.399541ms to LocalClient.Create
	I0805 10:50:21.594016   10702 start.go:128] duration metric: took 2.347633834s to createHost
	I0805 10:50:21.594083   10702 start.go:83] releasing machines lock for "old-k8s-version-935000", held for 2.348310875s
	W0805 10:50:21.594429   10702 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-935000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-935000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 10:50:21.604017   10702 out.go:177] 
	W0805 10:50:21.611143   10702 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 10:50:21.611195   10702 out.go:239] * 
	* 
	W0805 10:50:21.613922   10702 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 10:50:21.622081   10702 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-935000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-935000 -n old-k8s-version-935000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-935000 -n old-k8s-version-935000: exit status 7 (65.728041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-935000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.98s)
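
The 9.98s wall time matches the retry shape visible in the log: one createHost attempt (~2.4s), a fixed 5-second backoff ("Will try again in 5 seconds ..."), then one more attempt before exiting with GUEST_PROVISION. A hypothetical Go sketch of that control flow (tryCreateHost is an illustrative stand-in, not minikube's actual API):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // tryCreateHost stands in for the qemu2/socket_vmnet start that fails above.
    func tryCreateHost() error {
        return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
    }

    func main() {
        if err := tryCreateHost(); err != nil {
            fmt.Println("! StartHost failed, but will try again:", err)
            time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
            if err := tryCreateHost(); err != nil {
                fmt.Println("X Exiting due to GUEST_PROVISION:", err)
            }
        }
    }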

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-935000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-935000 create -f testdata/busybox.yaml: exit status 1 (29.771667ms)

** stderr ** 
	error: context "old-k8s-version-935000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-935000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-935000 -n old-k8s-version-935000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-935000 -n old-k8s-version-935000: exit status 7 (29.970792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-935000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-935000 -n old-k8s-version-935000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-935000 -n old-k8s-version-935000: exit status 7 (29.391042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-935000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)
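
This failure is purely downstream of FirstStart: the VM was never provisioned, so the kubeconfig context "old-k8s-version-935000" was never written, and every kubectl --context invocation exits 1. A quick confirmation sketch (assumes kubectl is on PATH; not part of the suite):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Lists context names only; "old-k8s-version-935000" will be absent
        // because the cluster was never created.
        out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").CombinedOutput()
        if err != nil {
            fmt.Println("kubectl failed:", err)
            return
        }
        fmt.Printf("available contexts:\n%s", out)
    }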

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-935000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-935000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-935000 describe deploy/metrics-server -n kube-system: exit status 1 (26.730125ms)

** stderr ** 
	error: context "old-k8s-version-935000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-935000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-935000 -n old-k8s-version-935000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-935000 -n old-k8s-version-935000: exit status 7 (30.084375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-935000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/old-k8s-version/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-935000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-935000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.1872655s)

-- stdout --
	* [old-k8s-version-935000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19374
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-935000" primary control-plane node in "old-k8s-version-935000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-935000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-935000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0805 10:50:25.564735   10756 out.go:291] Setting OutFile to fd 1 ...
	I0805 10:50:25.564872   10756 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:50:25.564875   10756 out.go:304] Setting ErrFile to fd 2...
	I0805 10:50:25.564877   10756 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:50:25.565007   10756 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
	I0805 10:50:25.566068   10756 out.go:298] Setting JSON to false
	I0805 10:50:25.582094   10756 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6595,"bootTime":1722873630,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 10:50:25.582171   10756 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 10:50:25.586929   10756 out.go:177] * [old-k8s-version-935000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 10:50:25.594098   10756 out.go:177]   - MINIKUBE_LOCATION=19374
	I0805 10:50:25.594165   10756 notify.go:220] Checking for updates...
	I0805 10:50:25.600006   10756 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	I0805 10:50:25.603022   10756 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 10:50:25.604580   10756 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 10:50:25.607990   10756 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	I0805 10:50:25.611069   10756 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 10:50:25.614309   10756 config.go:182] Loaded profile config "old-k8s-version-935000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0805 10:50:25.617970   10756 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0805 10:50:25.621030   10756 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 10:50:25.626087   10756 out.go:177] * Using the qemu2 driver based on existing profile
	I0805 10:50:25.633008   10756 start.go:297] selected driver: qemu2
	I0805 10:50:25.633014   10756 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-935000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-935000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 10:50:25.633076   10756 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 10:50:25.635552   10756 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 10:50:25.635600   10756 cni.go:84] Creating CNI manager for ""
	I0805 10:50:25.635610   10756 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0805 10:50:25.635629   10756 start.go:340] cluster config:
	{Name:old-k8s-version-935000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-935000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 10:50:25.639350   10756 iso.go:125] acquiring lock: {Name:mk3c732f608e41abb95f9d7b90e7a96dff21b06c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 10:50:25.646982   10756 out.go:177] * Starting "old-k8s-version-935000" primary control-plane node in "old-k8s-version-935000" cluster
	I0805 10:50:25.651047   10756 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0805 10:50:25.651062   10756 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0805 10:50:25.651072   10756 cache.go:56] Caching tarball of preloaded images
	I0805 10:50:25.651137   10756 preload.go:172] Found /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 10:50:25.651143   10756 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0805 10:50:25.651197   10756 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/old-k8s-version-935000/config.json ...
	I0805 10:50:25.651720   10756 start.go:360] acquireMachinesLock for old-k8s-version-935000: {Name:mkb00f076c5be0afa0d8c4c2e732c0f60f89f86b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 10:50:25.651756   10756 start.go:364] duration metric: took 29.875µs to acquireMachinesLock for "old-k8s-version-935000"
	I0805 10:50:25.651765   10756 start.go:96] Skipping create...Using existing machine configuration
	I0805 10:50:25.651772   10756 fix.go:54] fixHost starting: 
	I0805 10:50:25.651902   10756 fix.go:112] recreateIfNeeded on old-k8s-version-935000: state=Stopped err=<nil>
	W0805 10:50:25.651913   10756 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 10:50:25.655064   10756 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-935000" ...
	I0805 10:50:25.663044   10756 qemu.go:418] Using hvf for hardware acceleration
	I0805 10:50:25.663094   10756 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/old-k8s-version-935000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/old-k8s-version-935000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/old-k8s-version-935000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:17:54:47:e0:76 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/old-k8s-version-935000/disk.qcow2
	I0805 10:50:25.665209   10756 main.go:141] libmachine: STDOUT: 
	I0805 10:50:25.665232   10756 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 10:50:25.665263   10756 fix.go:56] duration metric: took 13.491584ms for fixHost
	I0805 10:50:25.665269   10756 start.go:83] releasing machines lock for "old-k8s-version-935000", held for 13.508ms
	W0805 10:50:25.665275   10756 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 10:50:25.665310   10756 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 10:50:25.665315   10756 start.go:729] Will try again in 5 seconds ...
	I0805 10:50:30.667414   10756 start.go:360] acquireMachinesLock for old-k8s-version-935000: {Name:mkb00f076c5be0afa0d8c4c2e732c0f60f89f86b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 10:50:30.667829   10756 start.go:364] duration metric: took 322.875µs to acquireMachinesLock for "old-k8s-version-935000"
	I0805 10:50:30.667924   10756 start.go:96] Skipping create...Using existing machine configuration
	I0805 10:50:30.667947   10756 fix.go:54] fixHost starting: 
	I0805 10:50:30.668577   10756 fix.go:112] recreateIfNeeded on old-k8s-version-935000: state=Stopped err=<nil>
	W0805 10:50:30.668605   10756 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 10:50:30.676832   10756 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-935000" ...
	I0805 10:50:30.680861   10756 qemu.go:418] Using hvf for hardware acceleration
	I0805 10:50:30.681059   10756 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/old-k8s-version-935000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/old-k8s-version-935000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/old-k8s-version-935000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:17:54:47:e0:76 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/old-k8s-version-935000/disk.qcow2
	I0805 10:50:30.689733   10756 main.go:141] libmachine: STDOUT: 
	I0805 10:50:30.689795   10756 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 10:50:30.689869   10756 fix.go:56] duration metric: took 21.927209ms for fixHost
	I0805 10:50:30.689890   10756 start.go:83] releasing machines lock for "old-k8s-version-935000", held for 22.0385ms
	W0805 10:50:30.690026   10756 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-935000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-935000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 10:50:30.697813   10756 out.go:177] 
	W0805 10:50:30.701890   10756 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 10:50:30.701912   10756 out.go:239] * 
	* 
	W0805 10:50:30.704510   10756 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 10:50:30.711843   10756 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-935000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-935000 -n old-k8s-version-935000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-935000 -n old-k8s-version-935000: exit status 7 (69.893458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-935000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.26s)
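Note: every failure in this group traces to one host-side error: the qemu2 driver cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so no VM ever boots. The following standalone Go sketch (not part of the test suite; the socket path is taken from the logs above) probes that socket the same way a client would:

	// probe.go - minimal sketch: checks whether socket_vmnet is accepting
	// connections on the unix socket used by the qemu invocations above.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path from the failing logs
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// Reproduces the same "Connection refused" the driver reports
			// when the daemon is not running or not listening.
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
			os.Exit(1)
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If the probe fails, restarting the socket_vmnet service on the host (it must run as root) is the likely fix; the exact restart command depends on how it was installed.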

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-935000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-935000 -n old-k8s-version-935000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-935000 -n old-k8s-version-935000: exit status 7 (32.757291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-935000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-935000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-935000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-935000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.531209ms)

** stderr ** 
	error: context "old-k8s-version-935000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-935000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-935000 -n old-k8s-version-935000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-935000 -n old-k8s-version-935000: exit status 7 (29.352334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-935000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)
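Note: the repeated `context "old-k8s-version-935000" does not exist` comes from kubectl's kubeconfig lookup: the cluster never started, so no context was ever written. A sketch using k8s.io/client-go (an assumed diagnostic, not suite code) that lists which contexts the kubeconfig actually holds:

	// contexts.go - sketch: load the default kubeconfig (KUBECONFIG or
	// ~/.kube/config) and report whether the expected context exists.
	package main

	import (
		"fmt"
		"log"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		rules := clientcmd.NewDefaultClientConfigLoadingRules()
		cfg, err := rules.Load()
		if err != nil {
			log.Fatal(err)
		}
		for name := range cfg.Contexts {
			fmt.Println("found context:", name)
		}
		if _, ok := cfg.Contexts["old-k8s-version-935000"]; !ok {
			fmt.Println(`context "old-k8s-version-935000" does not exist`)
		}
	}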

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-935000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
[]string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-935000 -n old-k8s-version-935000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-935000 -n old-k8s-version-935000: exit status 7 (29.76825ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-935000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
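Note: the `(-want +got)` block above is a go-cmp style diff: all eight expected v1.20.0 images sit on the want side and nothing is on the got side, because `image list` ran against a VM that never started. A sketch (assumed helper, not the suite's actual comparison code) producing the same kind of report with github.com/google/go-cmp:

	// imagediff.go - sketch: diff expected vs. listed images.
	package main

	import (
		"fmt"

		"github.com/google/go-cmp/cmp"
	)

	func main() {
		want := []string{
			"k8s.gcr.io/kube-apiserver:v1.20.0",
			"k8s.gcr.io/pause:3.2",
			// ...remaining expected v1.20.0 images...
		}
		got := []string{} // stopped VM: nothing listed
		if diff := cmp.Diff(want, got); diff != "" {
			fmt.Printf("v1.20.0 images missing (-want +got):\n%s", diff)
		}
	}

The exact formatting options used by the suite may differ, but the -/+ notation reads the same way.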

TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-935000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-935000 --alsologtostderr -v=1: exit status 83 (40.668375ms)

-- stdout --
	* The control-plane node old-k8s-version-935000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-935000"

-- /stdout --
** stderr ** 
	I0805 10:50:30.983211   10775 out.go:291] Setting OutFile to fd 1 ...
	I0805 10:50:30.983618   10775 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:50:30.983622   10775 out.go:304] Setting ErrFile to fd 2...
	I0805 10:50:30.983625   10775 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:50:30.983816   10775 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
	I0805 10:50:30.984038   10775 out.go:298] Setting JSON to false
	I0805 10:50:30.984043   10775 mustload.go:65] Loading cluster: old-k8s-version-935000
	I0805 10:50:30.984245   10775 config.go:182] Loaded profile config "old-k8s-version-935000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0805 10:50:30.988715   10775 out.go:177] * The control-plane node old-k8s-version-935000 host is not running: state=Stopped
	I0805 10:50:30.991600   10775 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-935000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-935000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-935000 -n old-k8s-version-935000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-935000 -n old-k8s-version-935000: exit status 7 (29.114333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-935000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-935000 -n old-k8s-version-935000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-935000 -n old-k8s-version-935000: exit status 7 (30.567709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-935000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)
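Note: `pause` exits with status 83 here alongside the "host is not running: state=Stopped" advice; the post-mortem then probes `status --format={{.Host}}`, which exits 7 for a stopped host. A sketch (hypothetical diagnostic, not suite code) that runs the same status probe and reads its exit code:

	// status.go - sketch: run the status probe and interpret its exit code.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64", "status",
			"--format={{.Host}}", "-p", "old-k8s-version-935000")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s", out) // prints "Stopped" in the runs above
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			// exit status 7 is what the post-mortem helpers treat as
			// "host stopped (may be ok)".
			fmt.Printf("exit status %d\n", ee.ExitCode())
		}
	}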

TestStartStop/group/no-preload/serial/FirstStart (10.02s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-519000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-rc.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-519000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-rc.0: exit status 80 (9.943825459s)

-- stdout --
	* [no-preload-519000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19374
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-519000" primary control-plane node in "no-preload-519000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-519000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0805 10:50:31.298614   10792 out.go:291] Setting OutFile to fd 1 ...
	I0805 10:50:31.298763   10792 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:50:31.298766   10792 out.go:304] Setting ErrFile to fd 2...
	I0805 10:50:31.298768   10792 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:50:31.298890   10792 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
	I0805 10:50:31.299929   10792 out.go:298] Setting JSON to false
	I0805 10:50:31.315923   10792 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6601,"bootTime":1722873630,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 10:50:31.315984   10792 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 10:50:31.319749   10792 out.go:177] * [no-preload-519000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 10:50:31.326661   10792 out.go:177]   - MINIKUBE_LOCATION=19374
	I0805 10:50:31.326733   10792 notify.go:220] Checking for updates...
	I0805 10:50:31.333701   10792 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	I0805 10:50:31.336634   10792 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 10:50:31.339677   10792 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 10:50:31.342674   10792 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	I0805 10:50:31.345636   10792 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 10:50:31.349004   10792 config.go:182] Loaded profile config "cert-expiration-440000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 10:50:31.349065   10792 config.go:182] Loaded profile config "multinode-022000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 10:50:31.349109   10792 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 10:50:31.353637   10792 out.go:177] * Using the qemu2 driver based on user configuration
	I0805 10:50:31.360651   10792 start.go:297] selected driver: qemu2
	I0805 10:50:31.360664   10792 start.go:901] validating driver "qemu2" against <nil>
	I0805 10:50:31.360671   10792 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 10:50:31.362768   10792 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 10:50:31.365612   10792 out.go:177] * Automatically selected the socket_vmnet network
	I0805 10:50:31.367166   10792 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 10:50:31.367181   10792 cni.go:84] Creating CNI manager for ""
	I0805 10:50:31.367187   10792 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 10:50:31.367190   10792 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0805 10:50:31.367224   10792 start.go:340] cluster config:
	{Name:no-preload-519000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:no-preload-519000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 10:50:31.370832   10792 iso.go:125] acquiring lock: {Name:mk3c732f608e41abb95f9d7b90e7a96dff21b06c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 10:50:31.378709   10792 out.go:177] * Starting "no-preload-519000" primary control-plane node in "no-preload-519000" cluster
	I0805 10:50:31.386687   10792 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
	I0805 10:50:31.386780   10792 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/no-preload-519000/config.json ...
	I0805 10:50:31.386807   10792 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/no-preload-519000/config.json: {Name:mka1d71015b05e265c6ce062fcd44fe2cdbbce5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 10:50:31.386808   10792 cache.go:107] acquiring lock: {Name:mk51c9e880791de1866a5f6934617528daccd4f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 10:50:31.386829   10792 cache.go:107] acquiring lock: {Name:mkc9c40f5e027e6069a86573383fdd05186f6b85 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 10:50:31.386822   10792 cache.go:107] acquiring lock: {Name:mk268fe0297473e49c2bbec1fdcf0d78f556f9f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 10:50:31.386887   10792 cache.go:115] /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0805 10:50:31.386893   10792 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 87.541µs
	I0805 10:50:31.386900   10792 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0805 10:50:31.386809   10792 cache.go:107] acquiring lock: {Name:mk9328730921d430f4d42ad477860f17c23ca42d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 10:50:31.386943   10792 cache.go:107] acquiring lock: {Name:mkb575c0ef7b99aaf2c325e14f00bace3e1222cf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 10:50:31.386962   10792 cache.go:107] acquiring lock: {Name:mkcec2af92cdde934082a4f22c2e5baf6d27c6d4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 10:50:31.386967   10792 cache.go:107] acquiring lock: {Name:mkd5b7294b510fbc40bfd9475fb625fe31b14417 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 10:50:31.387012   10792 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0805 10:50:31.387077   10792 cache.go:107] acquiring lock: {Name:mk62049903aab992192d259e11cb2f321e28d590 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 10:50:31.387180   10792 start.go:360] acquireMachinesLock for no-preload-519000: {Name:mkb00f076c5be0afa0d8c4c2e732c0f60f89f86b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 10:50:31.387213   10792 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0805 10:50:31.387244   10792 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0805 10:50:31.387264   10792 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0805 10:50:31.387306   10792 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0805 10:50:31.387347   10792 start.go:364] duration metric: took 159.708µs to acquireMachinesLock for "no-preload-519000"
	I0805 10:50:31.387354   10792 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0805 10:50:31.387359   10792 start.go:93] Provisioning new machine with config: &{Name:no-preload-519000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:no-preload-519000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 10:50:31.387386   10792 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 10:50:31.387426   10792 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0805 10:50:31.395590   10792 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0805 10:50:31.399715   10792 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-rc.0
	I0805 10:50:31.399751   10792 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-rc.0
	I0805 10:50:31.399852   10792 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-rc.0
	I0805 10:50:31.399892   10792 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0805 10:50:31.400949   10792 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0805 10:50:31.400971   10792 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0805 10:50:31.401064   10792 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-rc.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-rc.0
	I0805 10:50:31.414216   10792 start.go:159] libmachine.API.Create for "no-preload-519000" (driver="qemu2")
	I0805 10:50:31.414241   10792 client.go:168] LocalClient.Create starting
	I0805 10:50:31.414345   10792 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/ca.pem
	I0805 10:50:31.414377   10792 main.go:141] libmachine: Decoding PEM data...
	I0805 10:50:31.414387   10792 main.go:141] libmachine: Parsing certificate...
	I0805 10:50:31.414430   10792 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/cert.pem
	I0805 10:50:31.414455   10792 main.go:141] libmachine: Decoding PEM data...
	I0805 10:50:31.414462   10792 main.go:141] libmachine: Parsing certificate...
	I0805 10:50:31.414839   10792 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19374-6507/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 10:50:31.569220   10792 main.go:141] libmachine: Creating SSH key...
	I0805 10:50:31.658246   10792 main.go:141] libmachine: Creating Disk image...
	I0805 10:50:31.658283   10792 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 10:50:31.658488   10792 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/no-preload-519000/disk.qcow2.raw /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/no-preload-519000/disk.qcow2
	I0805 10:50:31.667898   10792 main.go:141] libmachine: STDOUT: 
	I0805 10:50:31.667925   10792 main.go:141] libmachine: STDERR: 
	I0805 10:50:31.667989   10792 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/no-preload-519000/disk.qcow2 +20000M
	I0805 10:50:31.677127   10792 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 10:50:31.677153   10792 main.go:141] libmachine: STDERR: 
	I0805 10:50:31.677167   10792 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/no-preload-519000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/no-preload-519000/disk.qcow2
	I0805 10:50:31.677171   10792 main.go:141] libmachine: Starting QEMU VM...
	I0805 10:50:31.677182   10792 qemu.go:418] Using hvf for hardware acceleration
	I0805 10:50:31.677210   10792 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/no-preload-519000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/no-preload-519000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/no-preload-519000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:a9:2b:99:d2:47 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/no-preload-519000/disk.qcow2
	I0805 10:50:31.679041   10792 main.go:141] libmachine: STDOUT: 
	I0805 10:50:31.679069   10792 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 10:50:31.679085   10792 client.go:171] duration metric: took 264.843583ms to LocalClient.Create
	I0805 10:50:31.798670   10792 cache.go:162] opening:  /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0
	I0805 10:50:31.805287   10792 cache.go:162] opening:  /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0
	I0805 10:50:31.806603   10792 cache.go:162] opening:  /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0
	I0805 10:50:31.827351   10792 cache.go:162] opening:  /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1
	I0805 10:50:31.856466   10792 cache.go:162] opening:  /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10
	I0805 10:50:31.899181   10792 cache.go:162] opening:  /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-rc.0
	I0805 10:50:31.953961   10792 cache.go:162] opening:  /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0
	I0805 10:50:31.967580   10792 cache.go:157] /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0805 10:50:31.967618   10792 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 580.791625ms
	I0805 10:50:31.967657   10792 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0805 10:50:33.679349   10792 start.go:128] duration metric: took 2.291953s to createHost
	I0805 10:50:33.679414   10792 start.go:83] releasing machines lock for "no-preload-519000", held for 2.292087916s
	W0805 10:50:33.679482   10792 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 10:50:33.695635   10792 out.go:177] * Deleting "no-preload-519000" in qemu2 ...
	W0805 10:50:33.725061   10792 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 10:50:33.725088   10792 start.go:729] Will try again in 5 seconds ...
	I0805 10:50:34.832734   10792 cache.go:157] /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0805 10:50:34.832788   10792 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 3.445919666s
	I0805 10:50:34.832816   10792 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0805 10:50:35.505914   10792 cache.go:157] /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0 exists
	I0805 10:50:35.505961   10792 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0-rc.0" -> "/Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0" took 4.118953791s
	I0805 10:50:35.506008   10792 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0-rc.0 -> /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0 succeeded
	I0805 10:50:35.859237   10792 cache.go:157] /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0 exists
	I0805 10:50:35.859314   10792 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0-rc.0" -> "/Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0" took 4.472563333s
	I0805 10:50:35.859345   10792 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0-rc.0 -> /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0 succeeded
	I0805 10:50:36.681229   10792 cache.go:157] /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-rc.0 exists
	I0805 10:50:36.681280   10792 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0-rc.0" -> "/Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-rc.0" took 5.294378875s
	I0805 10:50:36.681307   10792 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0-rc.0 -> /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-rc.0 succeeded
	I0805 10:50:36.932512   10792 cache.go:157] /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0 exists
	I0805 10:50:36.932561   10792 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0-rc.0" -> "/Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0" took 5.545824959s
	I0805 10:50:36.932585   10792 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0-rc.0 -> /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0 succeeded
	I0805 10:50:38.443665   10792 cache.go:157] /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0805 10:50:38.443738   10792 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 7.056892917s
	I0805 10:50:38.443766   10792 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0805 10:50:38.443833   10792 cache.go:87] Successfully saved all images to host disk.
	I0805 10:50:38.727294   10792 start.go:360] acquireMachinesLock for no-preload-519000: {Name:mkb00f076c5be0afa0d8c4c2e732c0f60f89f86b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 10:50:38.727666   10792 start.go:364] duration metric: took 310.5µs to acquireMachinesLock for "no-preload-519000"
	I0805 10:50:38.727761   10792 start.go:93] Provisioning new machine with config: &{Name:no-preload-519000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:no-preload-519000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 10:50:38.728017   10792 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 10:50:38.738525   10792 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0805 10:50:38.790799   10792 start.go:159] libmachine.API.Create for "no-preload-519000" (driver="qemu2")
	I0805 10:50:38.790849   10792 client.go:168] LocalClient.Create starting
	I0805 10:50:38.790968   10792 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/ca.pem
	I0805 10:50:38.791036   10792 main.go:141] libmachine: Decoding PEM data...
	I0805 10:50:38.791058   10792 main.go:141] libmachine: Parsing certificate...
	I0805 10:50:38.791130   10792 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/cert.pem
	I0805 10:50:38.791173   10792 main.go:141] libmachine: Decoding PEM data...
	I0805 10:50:38.791190   10792 main.go:141] libmachine: Parsing certificate...
	I0805 10:50:38.791784   10792 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19374-6507/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 10:50:38.989676   10792 main.go:141] libmachine: Creating SSH key...
	I0805 10:50:39.154917   10792 main.go:141] libmachine: Creating Disk image...
	I0805 10:50:39.154923   10792 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 10:50:39.155146   10792 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/no-preload-519000/disk.qcow2.raw /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/no-preload-519000/disk.qcow2
	I0805 10:50:39.164706   10792 main.go:141] libmachine: STDOUT: 
	I0805 10:50:39.164728   10792 main.go:141] libmachine: STDERR: 
	I0805 10:50:39.164781   10792 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/no-preload-519000/disk.qcow2 +20000M
	I0805 10:50:39.172710   10792 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 10:50:39.172744   10792 main.go:141] libmachine: STDERR: 
	I0805 10:50:39.172760   10792 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/no-preload-519000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/no-preload-519000/disk.qcow2
	I0805 10:50:39.172770   10792 main.go:141] libmachine: Starting QEMU VM...
	I0805 10:50:39.172776   10792 qemu.go:418] Using hvf for hardware acceleration
	I0805 10:50:39.172817   10792 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/no-preload-519000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/no-preload-519000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/no-preload-519000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:1c:da:11:5b:f9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/no-preload-519000/disk.qcow2
	I0805 10:50:39.174552   10792 main.go:141] libmachine: STDOUT: 
	I0805 10:50:39.174569   10792 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 10:50:39.174581   10792 client.go:171] duration metric: took 383.732333ms to LocalClient.Create
	I0805 10:50:41.176424   10792 start.go:128] duration metric: took 2.4483815s to createHost
	I0805 10:50:41.176512   10792 start.go:83] releasing machines lock for "no-preload-519000", held for 2.448858833s
	W0805 10:50:41.176839   10792 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-519000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-519000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 10:50:41.185168   10792 out.go:177] 
	W0805 10:50:41.190320   10792 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 10:50:41.190348   10792 out.go:239] * 
	* 
	W0805 10:50:41.192716   10792 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 10:50:41.200213   10792 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-519000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-rc.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-519000 -n no-preload-519000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-519000 -n no-preload-519000: exit status 7 (70.5515ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-519000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (10.02s)
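Note: in this run the image cache completed successfully (all eight v1.31.0-rc.0 images were saved to host disk) while both VM creation attempts failed at the same socket connect, so the failure is isolated to host networking, not image downloads. The log's "will try again in 5 seconds" line reflects a simple retry-after-delay; a generic sketch of that pattern follows (illustrative names, not minikube's actual implementation):

	// retry.go - sketch of the retry-once-after-delay pattern in the log.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// startHost stands in for the driver start; it fails the same way the
	// logs do so that the retry path is exercised.
	func startHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := startHost(); err != nil {
			fmt.Printf("! StartHost failed, but will try again: %v\n", err)
			time.Sleep(5 * time.Second)
			if err := startHost(); err != nil {
				fmt.Printf("* Failed to start qemu2 VM: %v\n", err)
			}
		}
	}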

TestStartStop/group/no-preload/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-519000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-519000 create -f testdata/busybox.yaml: exit status 1 (29.376292ms)

** stderr ** 
	error: context "no-preload-519000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-519000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-519000 -n no-preload-519000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-519000 -n no-preload-519000: exit status 7 (29.537041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-519000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-519000 -n no-preload-519000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-519000 -n no-preload-519000: exit status 7 (29.470292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-519000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-519000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-519000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-519000 describe deploy/metrics-server -n kube-system: exit status 1 (26.795916ms)

** stderr ** 
	error: context "no-preload-519000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-519000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-519000 -n no-preload-519000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-519000 -n no-preload-519000: exit status 7 (30.422458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-519000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/no-preload/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-519000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-rc.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-519000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-rc.0: exit status 80 (5.183228667s)

-- stdout --
	* [no-preload-519000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19374
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-519000" primary control-plane node in "no-preload-519000" cluster
	* Restarting existing qemu2 VM for "no-preload-519000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-519000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0805 10:50:45.099501   10873 out.go:291] Setting OutFile to fd 1 ...
	I0805 10:50:45.099677   10873 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:50:45.099680   10873 out.go:304] Setting ErrFile to fd 2...
	I0805 10:50:45.099682   10873 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:50:45.099817   10873 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
	I0805 10:50:45.100793   10873 out.go:298] Setting JSON to false
	I0805 10:50:45.116857   10873 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6615,"bootTime":1722873630,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 10:50:45.116933   10873 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 10:50:45.121655   10873 out.go:177] * [no-preload-519000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 10:50:45.128557   10873 out.go:177]   - MINIKUBE_LOCATION=19374
	I0805 10:50:45.128623   10873 notify.go:220] Checking for updates...
	I0805 10:50:45.134513   10873 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	I0805 10:50:45.137486   10873 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 10:50:45.140546   10873 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 10:50:45.143470   10873 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	I0805 10:50:45.146568   10873 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 10:50:45.149885   10873 config.go:182] Loaded profile config "no-preload-519000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-rc.0
	I0805 10:50:45.150145   10873 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 10:50:45.154460   10873 out.go:177] * Using the qemu2 driver based on existing profile
	I0805 10:50:45.161594   10873 start.go:297] selected driver: qemu2
	I0805 10:50:45.161600   10873 start.go:901] validating driver "qemu2" against &{Name:no-preload-519000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:no-preload-519000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 10:50:45.161668   10873 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 10:50:45.163948   10873 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 10:50:45.163991   10873 cni.go:84] Creating CNI manager for ""
	I0805 10:50:45.163998   10873 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 10:50:45.164024   10873 start.go:340] cluster config:
	{Name:no-preload-519000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:no-preload-519000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 10:50:45.167600   10873 iso.go:125] acquiring lock: {Name:mk3c732f608e41abb95f9d7b90e7a96dff21b06c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 10:50:45.175563   10873 out.go:177] * Starting "no-preload-519000" primary control-plane node in "no-preload-519000" cluster
	I0805 10:50:45.179381   10873 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
	I0805 10:50:45.179446   10873 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/no-preload-519000/config.json ...
	I0805 10:50:45.179492   10873 cache.go:107] acquiring lock: {Name:mk51c9e880791de1866a5f6934617528daccd4f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 10:50:45.179479   10873 cache.go:107] acquiring lock: {Name:mk9328730921d430f4d42ad477860f17c23ca42d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 10:50:45.179497   10873 cache.go:107] acquiring lock: {Name:mk268fe0297473e49c2bbec1fdcf0d78f556f9f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 10:50:45.179520   10873 cache.go:107] acquiring lock: {Name:mk62049903aab992192d259e11cb2f321e28d590 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 10:50:45.179524   10873 cache.go:107] acquiring lock: {Name:mkc9c40f5e027e6069a86573383fdd05186f6b85 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 10:50:45.179565   10873 cache.go:115] /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0 exists
	I0805 10:50:45.179612   10873 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0-rc.0" -> "/Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0" took 121.333µs
	I0805 10:50:45.179619   10873 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0-rc.0 -> /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-rc.0 succeeded
	I0805 10:50:45.179569   10873 cache.go:115] /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0 exists
	I0805 10:50:45.179584   10873 cache.go:115] /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0805 10:50:45.179629   10873 cache.go:115] /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0805 10:50:45.179634   10873 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 144.25µs
	I0805 10:50:45.179632   10873 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 109.125µs
	I0805 10:50:45.179638   10873 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0805 10:50:45.179628   10873 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0-rc.0" -> "/Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0" took 109.042µs
	I0805 10:50:45.179642   10873 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0-rc.0 -> /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-rc.0 succeeded
	I0805 10:50:45.179589   10873 cache.go:107] acquiring lock: {Name:mkd5b7294b510fbc40bfd9475fb625fe31b14417 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 10:50:45.179622   10873 cache.go:115] /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0 exists
	I0805 10:50:45.179640   10873 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0805 10:50:45.179590   10873 cache.go:107] acquiring lock: {Name:mkb575c0ef7b99aaf2c325e14f00bace3e1222cf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 10:50:45.179662   10873 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0-rc.0" -> "/Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0" took 182.125µs
	I0805 10:50:45.179679   10873 cache.go:107] acquiring lock: {Name:mkcec2af92cdde934082a4f22c2e5baf6d27c6d4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 10:50:45.179681   10873 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0-rc.0 -> /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-rc.0 succeeded
	I0805 10:50:45.179671   10873 cache.go:115] /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-rc.0 exists
	I0805 10:50:45.179703   10873 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0-rc.0" -> "/Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-rc.0" took 114.584µs
	I0805 10:50:45.179712   10873 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0-rc.0 -> /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-rc.0 succeeded
	I0805 10:50:45.179705   10873 cache.go:115] /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0805 10:50:45.179718   10873 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 144.334µs
	I0805 10:50:45.179721   10873 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0805 10:50:45.179723   10873 cache.go:115] /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0805 10:50:45.179726   10873 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 68.042µs
	I0805 10:50:45.179732   10873 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0805 10:50:45.179736   10873 cache.go:87] Successfully saved all images to host disk.
	I0805 10:50:45.179901   10873 start.go:360] acquireMachinesLock for no-preload-519000: {Name:mkb00f076c5be0afa0d8c4c2e732c0f60f89f86b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 10:50:45.179937   10873 start.go:364] duration metric: took 30.042µs to acquireMachinesLock for "no-preload-519000"
	I0805 10:50:45.179945   10873 start.go:96] Skipping create...Using existing machine configuration
	I0805 10:50:45.179950   10873 fix.go:54] fixHost starting: 
	I0805 10:50:45.180066   10873 fix.go:112] recreateIfNeeded on no-preload-519000: state=Stopped err=<nil>
	W0805 10:50:45.180077   10873 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 10:50:45.188382   10873 out.go:177] * Restarting existing qemu2 VM for "no-preload-519000" ...
	I0805 10:50:45.192464   10873 qemu.go:418] Using hvf for hardware acceleration
	I0805 10:50:45.192504   10873 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/no-preload-519000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/no-preload-519000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/no-preload-519000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:1c:da:11:5b:f9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/no-preload-519000/disk.qcow2
	I0805 10:50:45.194513   10873 main.go:141] libmachine: STDOUT: 
	I0805 10:50:45.194531   10873 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 10:50:45.194557   10873 fix.go:56] duration metric: took 14.606125ms for fixHost
	I0805 10:50:45.194561   10873 start.go:83] releasing machines lock for "no-preload-519000", held for 14.620417ms
	W0805 10:50:45.194573   10873 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 10:50:45.194605   10873 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 10:50:45.194609   10873 start.go:729] Will try again in 5 seconds ...
	I0805 10:50:50.196712   10873 start.go:360] acquireMachinesLock for no-preload-519000: {Name:mkb00f076c5be0afa0d8c4c2e732c0f60f89f86b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 10:50:50.197185   10873 start.go:364] duration metric: took 365.292µs to acquireMachinesLock for "no-preload-519000"
	I0805 10:50:50.197314   10873 start.go:96] Skipping create...Using existing machine configuration
	I0805 10:50:50.197336   10873 fix.go:54] fixHost starting: 
	I0805 10:50:50.198053   10873 fix.go:112] recreateIfNeeded on no-preload-519000: state=Stopped err=<nil>
	W0805 10:50:50.198083   10873 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 10:50:50.203487   10873 out.go:177] * Restarting existing qemu2 VM for "no-preload-519000" ...
	I0805 10:50:50.210406   10873 qemu.go:418] Using hvf for hardware acceleration
	I0805 10:50:50.210599   10873 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/no-preload-519000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/no-preload-519000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/no-preload-519000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:1c:da:11:5b:f9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/no-preload-519000/disk.qcow2
	I0805 10:50:50.219125   10873 main.go:141] libmachine: STDOUT: 
	I0805 10:50:50.219186   10873 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 10:50:50.219263   10873 fix.go:56] duration metric: took 21.928625ms for fixHost
	I0805 10:50:50.219280   10873 start.go:83] releasing machines lock for "no-preload-519000", held for 22.062541ms
	W0805 10:50:50.219410   10873 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-519000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-519000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 10:50:50.227366   10873 out.go:177] 
	W0805 10:50:50.230523   10873 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 10:50:50.230554   10873 out.go:239] * 
	* 
	W0805 10:50:50.233421   10873 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 10:50:50.241399   10873 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-519000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-rc.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-519000 -n no-preload-519000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-519000 -n no-preload-519000: exit status 7 (75.424333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-519000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.26s)
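Note: every start attempt in this run fails at the same step. socket_vmnet_client must connect to the /var/run/socket_vmnet unix socket before it can hand qemu a network file descriptor, and "Connection refused" on a unix socket means the socket file is present but nothing is accepting; the socket_vmnet daemon is simply not running on this CI host, so the 5-second retry cannot help. A minimal Go probe for that precondition (a diagnostic sketch, not part of the test suite; the socket path is taken from SocketVMnetPath in the config dump above):

	// socketprobe.go: dial the socket_vmnet socket the same way
	// socket_vmnet_client must before it can pass a network fd to qemu.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// The same condition this report logs:
			// Failed to connect to "/var/run/socket_vmnet": Connection refused
			fmt.Fprintln(os.Stderr, "socket_vmnet not reachable:", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If the path did not exist at all, the dial would fail with "no such file or directory" instead; "connection refused" specifically points at a missing or dead listener.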

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-519000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-519000 -n no-preload-519000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-519000 -n no-preload-519000: exit status 7 (33.848875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-519000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-519000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-519000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-519000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.253834ms)

** stderr ** 
	error: context "no-preload-519000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-519000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-519000 -n no-preload-519000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-519000 -n no-preload-519000: exit status 7 (30.389209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-519000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-519000 image list --format=json
start_stop_delete_test.go:304: v1.31.0-rc.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0-rc.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0-rc.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0-rc.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0-rc.0",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-519000 -n no-preload-519000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-519000 -n no-preload-519000: exit status 7 (30.299792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-519000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)
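Note: the "(-want +got)" block above follows go-cmp's diff convention. Every expected image carries a "-" and nothing carries a "+", meaning "minikube image list" returned an empty set, which is consistent with the VM never having started rather than with individual images being lost. A hedged reconstruction of that comparison shape (illustrative only; the actual assertion lives in start_stop_delete_test.go):

	// imagediff.go: reproduce the "(-want +got)" output shape with go-cmp.
	package main

	import (
		"fmt"

		"github.com/google/go-cmp/cmp"
	)

	func main() {
		want := []string{
			"registry.k8s.io/kube-apiserver:v1.31.0-rc.0",
			"registry.k8s.io/pause:3.10",
		}
		var got []string // empty: the host never came up, so no images were listed
		if diff := cmp.Diff(want, got); diff != "" {
			fmt.Printf("images missing (-want +got):\n%s", diff)
		}
	}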

TestStartStop/group/no-preload/serial/Pause (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-519000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-519000 --alsologtostderr -v=1: exit status 83 (41.045ms)

-- stdout --
	* The control-plane node no-preload-519000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-519000"

-- /stdout --
** stderr ** 
	I0805 10:50:50.520410   10892 out.go:291] Setting OutFile to fd 1 ...
	I0805 10:50:50.520562   10892 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:50:50.520565   10892 out.go:304] Setting ErrFile to fd 2...
	I0805 10:50:50.520568   10892 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:50:50.520691   10892 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
	I0805 10:50:50.520926   10892 out.go:298] Setting JSON to false
	I0805 10:50:50.520932   10892 mustload.go:65] Loading cluster: no-preload-519000
	I0805 10:50:50.521114   10892 config.go:182] Loaded profile config "no-preload-519000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-rc.0
	I0805 10:50:50.525309   10892 out.go:177] * The control-plane node no-preload-519000 host is not running: state=Stopped
	I0805 10:50:50.529301   10892 out.go:177]   To start a cluster, run: "minikube start -p no-preload-519000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-519000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-519000 -n no-preload-519000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-519000 -n no-preload-519000: exit status 7 (29.752166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-519000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-519000 -n no-preload-519000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-519000 -n no-preload-519000: exit status 7 (29.19225ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-519000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)

TestStartStop/group/embed-certs/serial/FirstStart (9.92s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-088000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-088000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (9.846171s)

-- stdout --
	* [embed-certs-088000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19374
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-088000" primary control-plane node in "embed-certs-088000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-088000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0805 10:50:50.833880   10910 out.go:291] Setting OutFile to fd 1 ...
	I0805 10:50:50.834011   10910 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:50:50.834014   10910 out.go:304] Setting ErrFile to fd 2...
	I0805 10:50:50.834017   10910 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:50:50.834154   10910 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
	I0805 10:50:50.835243   10910 out.go:298] Setting JSON to false
	I0805 10:50:50.851425   10910 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6620,"bootTime":1722873630,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 10:50:50.851490   10910 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 10:50:50.856276   10910 out.go:177] * [embed-certs-088000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 10:50:50.862301   10910 out.go:177]   - MINIKUBE_LOCATION=19374
	I0805 10:50:50.862373   10910 notify.go:220] Checking for updates...
	I0805 10:50:50.868272   10910 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	I0805 10:50:50.871245   10910 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 10:50:50.874266   10910 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 10:50:50.877228   10910 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	I0805 10:50:50.880271   10910 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 10:50:50.883566   10910 config.go:182] Loaded profile config "cert-expiration-440000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 10:50:50.883622   10910 config.go:182] Loaded profile config "multinode-022000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 10:50:50.883669   10910 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 10:50:50.887201   10910 out.go:177] * Using the qemu2 driver based on user configuration
	I0805 10:50:50.894239   10910 start.go:297] selected driver: qemu2
	I0805 10:50:50.894246   10910 start.go:901] validating driver "qemu2" against <nil>
	I0805 10:50:50.894252   10910 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 10:50:50.896330   10910 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 10:50:50.897554   10910 out.go:177] * Automatically selected the socket_vmnet network
	I0805 10:50:50.900370   10910 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 10:50:50.900402   10910 cni.go:84] Creating CNI manager for ""
	I0805 10:50:50.900409   10910 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 10:50:50.900412   10910 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0805 10:50:50.900450   10910 start.go:340] cluster config:
	{Name:embed-certs-088000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-088000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 10:50:50.903836   10910 iso.go:125] acquiring lock: {Name:mk3c732f608e41abb95f9d7b90e7a96dff21b06c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 10:50:50.911232   10910 out.go:177] * Starting "embed-certs-088000" primary control-plane node in "embed-certs-088000" cluster
	I0805 10:50:50.915220   10910 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 10:50:50.915233   10910 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0805 10:50:50.915240   10910 cache.go:56] Caching tarball of preloaded images
	I0805 10:50:50.915288   10910 preload.go:172] Found /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 10:50:50.915292   10910 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 10:50:50.915351   10910 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/embed-certs-088000/config.json ...
	I0805 10:50:50.915364   10910 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/embed-certs-088000/config.json: {Name:mk8c15234ae3554544c36b9f0a9984d321da9646 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 10:50:50.915751   10910 start.go:360] acquireMachinesLock for embed-certs-088000: {Name:mkb00f076c5be0afa0d8c4c2e732c0f60f89f86b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 10:50:50.915780   10910 start.go:364] duration metric: took 24.292µs to acquireMachinesLock for "embed-certs-088000"
	I0805 10:50:50.915790   10910 start.go:93] Provisioning new machine with config: &{Name:embed-certs-088000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-088000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 10:50:50.915816   10910 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 10:50:50.924242   10910 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0805 10:50:50.939626   10910 start.go:159] libmachine.API.Create for "embed-certs-088000" (driver="qemu2")
	I0805 10:50:50.939663   10910 client.go:168] LocalClient.Create starting
	I0805 10:50:50.939740   10910 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/ca.pem
	I0805 10:50:50.939771   10910 main.go:141] libmachine: Decoding PEM data...
	I0805 10:50:50.939780   10910 main.go:141] libmachine: Parsing certificate...
	I0805 10:50:50.939820   10910 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/cert.pem
	I0805 10:50:50.939842   10910 main.go:141] libmachine: Decoding PEM data...
	I0805 10:50:50.939851   10910 main.go:141] libmachine: Parsing certificate...
	I0805 10:50:50.940209   10910 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19374-6507/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 10:50:51.092141   10910 main.go:141] libmachine: Creating SSH key...
	I0805 10:50:51.167745   10910 main.go:141] libmachine: Creating Disk image...
	I0805 10:50:51.167754   10910 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 10:50:51.167955   10910 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/embed-certs-088000/disk.qcow2.raw /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/embed-certs-088000/disk.qcow2
	I0805 10:50:51.177659   10910 main.go:141] libmachine: STDOUT: 
	I0805 10:50:51.177683   10910 main.go:141] libmachine: STDERR: 
	I0805 10:50:51.177752   10910 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/embed-certs-088000/disk.qcow2 +20000M
	I0805 10:50:51.186357   10910 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 10:50:51.186377   10910 main.go:141] libmachine: STDERR: 
	I0805 10:50:51.186406   10910 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/embed-certs-088000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/embed-certs-088000/disk.qcow2
	I0805 10:50:51.186411   10910 main.go:141] libmachine: Starting QEMU VM...
	I0805 10:50:51.186428   10910 qemu.go:418] Using hvf for hardware acceleration
	I0805 10:50:51.186474   10910 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/embed-certs-088000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/embed-certs-088000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/embed-certs-088000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:f0:ef:95:db:ad -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/embed-certs-088000/disk.qcow2
	I0805 10:50:51.188358   10910 main.go:141] libmachine: STDOUT: 
	I0805 10:50:51.188373   10910 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 10:50:51.188393   10910 client.go:171] duration metric: took 248.728542ms to LocalClient.Create
	I0805 10:50:53.190673   10910 start.go:128] duration metric: took 2.274854583s to createHost
	I0805 10:50:53.190767   10910 start.go:83] releasing machines lock for "embed-certs-088000", held for 2.275007458s
	W0805 10:50:53.190939   10910 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 10:50:53.206091   10910 out.go:177] * Deleting "embed-certs-088000" in qemu2 ...
	W0805 10:50:53.232711   10910 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 10:50:53.232737   10910 start.go:729] Will try again in 5 seconds ...
	I0805 10:50:58.234683   10910 start.go:360] acquireMachinesLock for embed-certs-088000: {Name:mkb00f076c5be0afa0d8c4c2e732c0f60f89f86b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 10:50:58.235160   10910 start.go:364] duration metric: took 384.291µs to acquireMachinesLock for "embed-certs-088000"
	I0805 10:50:58.235265   10910 start.go:93] Provisioning new machine with config: &{Name:embed-certs-088000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-088000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 10:50:58.235538   10910 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 10:50:58.248233   10910 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0805 10:50:58.296238   10910 start.go:159] libmachine.API.Create for "embed-certs-088000" (driver="qemu2")
	I0805 10:50:58.296295   10910 client.go:168] LocalClient.Create starting
	I0805 10:50:58.296417   10910 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/ca.pem
	I0805 10:50:58.296484   10910 main.go:141] libmachine: Decoding PEM data...
	I0805 10:50:58.296502   10910 main.go:141] libmachine: Parsing certificate...
	I0805 10:50:58.296564   10910 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/cert.pem
	I0805 10:50:58.296607   10910 main.go:141] libmachine: Decoding PEM data...
	I0805 10:50:58.296623   10910 main.go:141] libmachine: Parsing certificate...
	I0805 10:50:58.297130   10910 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19374-6507/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 10:50:58.462222   10910 main.go:141] libmachine: Creating SSH key...
	I0805 10:50:58.587726   10910 main.go:141] libmachine: Creating Disk image...
	I0805 10:50:58.587732   10910 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 10:50:58.587917   10910 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/embed-certs-088000/disk.qcow2.raw /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/embed-certs-088000/disk.qcow2
	I0805 10:50:58.596764   10910 main.go:141] libmachine: STDOUT: 
	I0805 10:50:58.596785   10910 main.go:141] libmachine: STDERR: 
	I0805 10:50:58.596835   10910 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/embed-certs-088000/disk.qcow2 +20000M
	I0805 10:50:58.604695   10910 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 10:50:58.604710   10910 main.go:141] libmachine: STDERR: 
	I0805 10:50:58.604721   10910 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/embed-certs-088000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/embed-certs-088000/disk.qcow2
	I0805 10:50:58.604733   10910 main.go:141] libmachine: Starting QEMU VM...
	I0805 10:50:58.604744   10910 qemu.go:418] Using hvf for hardware acceleration
	I0805 10:50:58.604770   10910 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/embed-certs-088000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/embed-certs-088000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/embed-certs-088000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:04:a6:9a:87:0c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/embed-certs-088000/disk.qcow2
	I0805 10:50:58.606284   10910 main.go:141] libmachine: STDOUT: 
	I0805 10:50:58.606299   10910 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 10:50:58.606311   10910 client.go:171] duration metric: took 310.010334ms to LocalClient.Create
	I0805 10:51:00.608514   10910 start.go:128] duration metric: took 2.372967041s to createHost
	I0805 10:51:00.608607   10910 start.go:83] releasing machines lock for "embed-certs-088000", held for 2.373456417s
	W0805 10:51:00.609038   10910 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-088000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-088000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 10:51:00.618562   10910 out.go:177] 
	W0805 10:51:00.625723   10910 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 10:51:00.625821   10910 out.go:239] * 
	* 
	W0805 10:51:00.628351   10910 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 10:51:00.637732   10910 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-088000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-088000 -n embed-certs-088000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-088000 -n embed-certs-088000: exit status 7 (68.794083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-088000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (9.92s)
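Note: the executed command line above makes the failing step concrete. socket_vmnet_client dials /var/run/socket_vmnet and then runs qemu with the connected socket as file descriptor 3, which "-netdev socket,id=net0,fd=3" adopts; because the dial is refused, qemu is never launched at all. A sketch of that launch pattern in Go (a hypothetical helper shown only for illustration, relying on exec.Cmd.ExtraFiles, where ExtraFiles[0] becomes fd 3 in the child; all qemu flags except the relevant one are elided):

	// vmnetlaunch.go: connect to socket_vmnet and pass the socket to a
	// child process as fd 3, mirroring what socket_vmnet_client does.
	package main

	import (
		"log"
		"net"
		"os"
		"os/exec"
	)

	func main() {
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			// Exactly where this report's runs die: nothing is listening,
			// the dial is refused, and qemu never starts.
			log.Fatalf("dial socket_vmnet: %v", err)
		}
		f, err := conn.(*net.UnixConn).File() // dup the socket as an *os.File
		if err != nil {
			log.Fatal(err)
		}
		cmd := exec.Command("qemu-system-aarch64", "-netdev", "socket,id=net0,fd=3")
		cmd.ExtraFiles = []*os.File{f} // ExtraFiles[0] maps to fd 3
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatal(err)
		}
	}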

TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-088000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-088000 create -f testdata/busybox.yaml: exit status 1 (30.119083ms)

** stderr ** 
	error: context "embed-certs-088000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-088000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-088000 -n embed-certs-088000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-088000 -n embed-certs-088000: exit status 7 (30.77825ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-088000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-088000 -n embed-certs-088000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-088000 -n embed-certs-088000: exit status 7 (30.07375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-088000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.09s)
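
This failure is purely downstream of FirstStart: because the cluster never came up, minikube never wrote an embed-certs-088000 context into the kubeconfig, so every kubectl --context invocation fails client-side before contacting any server. A quick way to confirm the context is missing (an illustrative command, not part of the harness; the kubeconfig path comes from the log above):

	# List the contexts the test kubeconfig actually contains;
	# embed-certs-088000 is absent because start exited with status 80.
	kubectl --kubeconfig /Users/jenkins/minikube-integration/19374-6507/kubeconfig config get-contexts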

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-088000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-088000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-088000 describe deploy/metrics-server -n kube-system: exit status 1 (26.928209ms)

** stderr ** 
	error: context "embed-certs-088000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-088000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-088000 -n embed-certs-088000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-088000 -n embed-certs-088000: exit status 7 (30.371042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-088000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/embed-certs/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-088000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-088000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (5.181701125s)

-- stdout --
	* [embed-certs-088000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19374
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-088000" primary control-plane node in "embed-certs-088000" cluster
	* Restarting existing qemu2 VM for "embed-certs-088000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-088000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0805 10:51:03.051742   10957 out.go:291] Setting OutFile to fd 1 ...
	I0805 10:51:03.051878   10957 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:51:03.051881   10957 out.go:304] Setting ErrFile to fd 2...
	I0805 10:51:03.051884   10957 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:51:03.052011   10957 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
	I0805 10:51:03.052922   10957 out.go:298] Setting JSON to false
	I0805 10:51:03.068905   10957 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6633,"bootTime":1722873630,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 10:51:03.068976   10957 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 10:51:03.074232   10957 out.go:177] * [embed-certs-088000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 10:51:03.081241   10957 out.go:177]   - MINIKUBE_LOCATION=19374
	I0805 10:51:03.081315   10957 notify.go:220] Checking for updates...
	I0805 10:51:03.088160   10957 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	I0805 10:51:03.091261   10957 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 10:51:03.094200   10957 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 10:51:03.097219   10957 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	I0805 10:51:03.100203   10957 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 10:51:03.103426   10957 config.go:182] Loaded profile config "embed-certs-088000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 10:51:03.103689   10957 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 10:51:03.108220   10957 out.go:177] * Using the qemu2 driver based on existing profile
	I0805 10:51:03.114162   10957 start.go:297] selected driver: qemu2
	I0805 10:51:03.114169   10957 start.go:901] validating driver "qemu2" against &{Name:embed-certs-088000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-088000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 10:51:03.114234   10957 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 10:51:03.116459   10957 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 10:51:03.116485   10957 cni.go:84] Creating CNI manager for ""
	I0805 10:51:03.116497   10957 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 10:51:03.116519   10957 start.go:340] cluster config:
	{Name:embed-certs-088000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-088000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 10:51:03.119903   10957 iso.go:125] acquiring lock: {Name:mk3c732f608e41abb95f9d7b90e7a96dff21b06c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 10:51:03.127233   10957 out.go:177] * Starting "embed-certs-088000" primary control-plane node in "embed-certs-088000" cluster
	I0805 10:51:03.131208   10957 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 10:51:03.131224   10957 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0805 10:51:03.131237   10957 cache.go:56] Caching tarball of preloaded images
	I0805 10:51:03.131297   10957 preload.go:172] Found /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 10:51:03.131305   10957 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 10:51:03.131373   10957 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/embed-certs-088000/config.json ...
	I0805 10:51:03.131715   10957 start.go:360] acquireMachinesLock for embed-certs-088000: {Name:mkb00f076c5be0afa0d8c4c2e732c0f60f89f86b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 10:51:03.131753   10957 start.go:364] duration metric: took 28.958µs to acquireMachinesLock for "embed-certs-088000"
	I0805 10:51:03.131761   10957 start.go:96] Skipping create...Using existing machine configuration
	I0805 10:51:03.131767   10957 fix.go:54] fixHost starting: 
	I0805 10:51:03.131889   10957 fix.go:112] recreateIfNeeded on embed-certs-088000: state=Stopped err=<nil>
	W0805 10:51:03.131897   10957 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 10:51:03.139249   10957 out.go:177] * Restarting existing qemu2 VM for "embed-certs-088000" ...
	I0805 10:51:03.147251   10957 qemu.go:418] Using hvf for hardware acceleration
	I0805 10:51:03.147314   10957 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/embed-certs-088000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/embed-certs-088000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/embed-certs-088000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:04:a6:9a:87:0c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/embed-certs-088000/disk.qcow2
	I0805 10:51:03.149406   10957 main.go:141] libmachine: STDOUT: 
	I0805 10:51:03.149428   10957 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 10:51:03.149456   10957 fix.go:56] duration metric: took 17.691625ms for fixHost
	I0805 10:51:03.149461   10957 start.go:83] releasing machines lock for "embed-certs-088000", held for 17.703792ms
	W0805 10:51:03.149469   10957 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 10:51:03.149505   10957 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 10:51:03.149510   10957 start.go:729] Will try again in 5 seconds ...
	I0805 10:51:08.150600   10957 start.go:360] acquireMachinesLock for embed-certs-088000: {Name:mkb00f076c5be0afa0d8c4c2e732c0f60f89f86b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 10:51:08.151074   10957 start.go:364] duration metric: took 335.25µs to acquireMachinesLock for "embed-certs-088000"
	I0805 10:51:08.151220   10957 start.go:96] Skipping create...Using existing machine configuration
	I0805 10:51:08.151241   10957 fix.go:54] fixHost starting: 
	I0805 10:51:08.151930   10957 fix.go:112] recreateIfNeeded on embed-certs-088000: state=Stopped err=<nil>
	W0805 10:51:08.151956   10957 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 10:51:08.157515   10957 out.go:177] * Restarting existing qemu2 VM for "embed-certs-088000" ...
	I0805 10:51:08.161365   10957 qemu.go:418] Using hvf for hardware acceleration
	I0805 10:51:08.161587   10957 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/embed-certs-088000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/embed-certs-088000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/embed-certs-088000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:04:a6:9a:87:0c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/embed-certs-088000/disk.qcow2
	I0805 10:51:08.171125   10957 main.go:141] libmachine: STDOUT: 
	I0805 10:51:08.171200   10957 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 10:51:08.171320   10957 fix.go:56] duration metric: took 20.062334ms for fixHost
	I0805 10:51:08.171341   10957 start.go:83] releasing machines lock for "embed-certs-088000", held for 20.243083ms
	W0805 10:51:08.171559   10957 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-088000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-088000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 10:51:08.179478   10957 out.go:177] 
	W0805 10:51:08.183459   10957 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 10:51:08.183487   10957 out.go:239] * 
	* 
	W0805 10:51:08.186177   10957 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 10:51:08.192439   10957 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-088000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-088000 -n embed-certs-088000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-088000 -n embed-certs-088000: exit status 7 (66.835875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-088000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.25s)
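
The recovery path minikube suggests in the capture above is the right one once the vmnet helper is reachable again; a sketch of the sequence, reusing the exact flags the test ran with (assumes socket_vmnet has been restarted as outlined earlier):

	# Discard the half-provisioned VM, then repeat the post-stop start.
	out/minikube-darwin-arm64 delete -p embed-certs-088000
	out/minikube-darwin-arm64 start -p embed-certs-088000 --memory=2200 \
	  --alsologtostderr --wait=true --embed-certs --driver=qemu2 \
	  --kubernetes-version=v1.30.3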

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-088000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-088000 -n embed-certs-088000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-088000 -n embed-certs-088000: exit status 7 (31.794833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-088000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-088000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-088000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-088000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.849542ms)

** stderr ** 
	error: context "embed-certs-088000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-088000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-088000 -n embed-certs-088000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-088000 -n embed-certs-088000: exit status 7 (29.325875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-088000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-088000 image list --format=json
start_stop_delete_test.go:304: v1.30.3 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.3",
- 	"registry.k8s.io/kube-controller-manager:v1.30.3",
- 	"registry.k8s.io/kube-proxy:v1.30.3",
- 	"registry.k8s.io/kube-scheduler:v1.30.3",
- 	"registry.k8s.io/pause:3.9",
}
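
The block above is a want/got diff: every expected image is prefixed with "-" (present in want, missing from got) because image list returned none of the expected images from the stopped VM. The check can be reproduced by hand with the same command the test ran:

	# Same command the harness used; with the VM stopped there is no
	# runtime to query, which is why every image shows as missing.
	out/minikube-darwin-arm64 -p embed-certs-088000 image list --format=json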
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-088000 -n embed-certs-088000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-088000 -n embed-certs-088000: exit status 7 (29.054584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-088000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/embed-certs/serial/Pause (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-088000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-088000 --alsologtostderr -v=1: exit status 83 (39.481375ms)

-- stdout --
	* The control-plane node embed-certs-088000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-088000"

-- /stdout --
** stderr ** 
	I0805 10:51:08.456064   10983 out.go:291] Setting OutFile to fd 1 ...
	I0805 10:51:08.456205   10983 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:51:08.456208   10983 out.go:304] Setting ErrFile to fd 2...
	I0805 10:51:08.456211   10983 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:51:08.456352   10983 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
	I0805 10:51:08.456564   10983 out.go:298] Setting JSON to false
	I0805 10:51:08.456574   10983 mustload.go:65] Loading cluster: embed-certs-088000
	I0805 10:51:08.456770   10983 config.go:182] Loaded profile config "embed-certs-088000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 10:51:08.461353   10983 out.go:177] * The control-plane node embed-certs-088000 host is not running: state=Stopped
	I0805 10:51:08.465112   10983 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-088000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-088000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-088000 -n embed-certs-088000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-088000 -n embed-certs-088000: exit status 7 (29.662917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-088000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-088000 -n embed-certs-088000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-088000 -n embed-certs-088000: exit status 7 (28.087208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-088000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.10s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (10.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-325000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-325000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (9.989548667s)

-- stdout --
	* [default-k8s-diff-port-325000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19374
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-325000" primary control-plane node in "default-k8s-diff-port-325000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-325000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0805 10:51:08.873995   11007 out.go:291] Setting OutFile to fd 1 ...
	I0805 10:51:08.874137   11007 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:51:08.874140   11007 out.go:304] Setting ErrFile to fd 2...
	I0805 10:51:08.874143   11007 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:51:08.874265   11007 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
	I0805 10:51:08.875349   11007 out.go:298] Setting JSON to false
	I0805 10:51:08.891234   11007 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6638,"bootTime":1722873630,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 10:51:08.891298   11007 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 10:51:08.896281   11007 out.go:177] * [default-k8s-diff-port-325000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 10:51:08.903231   11007 out.go:177]   - MINIKUBE_LOCATION=19374
	I0805 10:51:08.903272   11007 notify.go:220] Checking for updates...
	I0805 10:51:08.908744   11007 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	I0805 10:51:08.912218   11007 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 10:51:08.915219   11007 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 10:51:08.918222   11007 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	I0805 10:51:08.921204   11007 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 10:51:08.924578   11007 config.go:182] Loaded profile config "cert-expiration-440000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 10:51:08.924642   11007 config.go:182] Loaded profile config "multinode-022000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 10:51:08.924698   11007 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 10:51:08.929149   11007 out.go:177] * Using the qemu2 driver based on user configuration
	I0805 10:51:08.936231   11007 start.go:297] selected driver: qemu2
	I0805 10:51:08.936238   11007 start.go:901] validating driver "qemu2" against <nil>
	I0805 10:51:08.936244   11007 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 10:51:08.938626   11007 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 10:51:08.941227   11007 out.go:177] * Automatically selected the socket_vmnet network
	I0805 10:51:08.944257   11007 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 10:51:08.944316   11007 cni.go:84] Creating CNI manager for ""
	I0805 10:51:08.944325   11007 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 10:51:08.944330   11007 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0805 10:51:08.944360   11007 start.go:340] cluster config:
	{Name:default-k8s-diff-port-325000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-325000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 10:51:08.948122   11007 iso.go:125] acquiring lock: {Name:mk3c732f608e41abb95f9d7b90e7a96dff21b06c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 10:51:08.955101   11007 out.go:177] * Starting "default-k8s-diff-port-325000" primary control-plane node in "default-k8s-diff-port-325000" cluster
	I0805 10:51:08.959201   11007 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 10:51:08.959225   11007 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0805 10:51:08.959238   11007 cache.go:56] Caching tarball of preloaded images
	I0805 10:51:08.959311   11007 preload.go:172] Found /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 10:51:08.959325   11007 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 10:51:08.959380   11007 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/default-k8s-diff-port-325000/config.json ...
	I0805 10:51:08.959396   11007 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/default-k8s-diff-port-325000/config.json: {Name:mk2e59462bf7c94fbbe55a2a34df82a956d31ba3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 10:51:08.959627   11007 start.go:360] acquireMachinesLock for default-k8s-diff-port-325000: {Name:mkb00f076c5be0afa0d8c4c2e732c0f60f89f86b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 10:51:08.959662   11007 start.go:364] duration metric: took 27.459µs to acquireMachinesLock for "default-k8s-diff-port-325000"
	I0805 10:51:08.959673   11007 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-325000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-325000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 10:51:08.959717   11007 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 10:51:08.967225   11007 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0805 10:51:08.983862   11007 start.go:159] libmachine.API.Create for "default-k8s-diff-port-325000" (driver="qemu2")
	I0805 10:51:08.983892   11007 client.go:168] LocalClient.Create starting
	I0805 10:51:08.983956   11007 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/ca.pem
	I0805 10:51:08.983986   11007 main.go:141] libmachine: Decoding PEM data...
	I0805 10:51:08.983996   11007 main.go:141] libmachine: Parsing certificate...
	I0805 10:51:08.984037   11007 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/cert.pem
	I0805 10:51:08.984059   11007 main.go:141] libmachine: Decoding PEM data...
	I0805 10:51:08.984066   11007 main.go:141] libmachine: Parsing certificate...
	I0805 10:51:08.984399   11007 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19374-6507/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 10:51:09.140573   11007 main.go:141] libmachine: Creating SSH key...
	I0805 10:51:09.340898   11007 main.go:141] libmachine: Creating Disk image...
	I0805 10:51:09.340908   11007 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 10:51:09.341115   11007 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/default-k8s-diff-port-325000/disk.qcow2.raw /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/default-k8s-diff-port-325000/disk.qcow2
	I0805 10:51:09.350868   11007 main.go:141] libmachine: STDOUT: 
	I0805 10:51:09.350884   11007 main.go:141] libmachine: STDERR: 
	I0805 10:51:09.350943   11007 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/default-k8s-diff-port-325000/disk.qcow2 +20000M
	I0805 10:51:09.358853   11007 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 10:51:09.358867   11007 main.go:141] libmachine: STDERR: 
	I0805 10:51:09.358881   11007 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/default-k8s-diff-port-325000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/default-k8s-diff-port-325000/disk.qcow2
	I0805 10:51:09.358889   11007 main.go:141] libmachine: Starting QEMU VM...
	I0805 10:51:09.358901   11007 qemu.go:418] Using hvf for hardware acceleration
	I0805 10:51:09.358927   11007 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/default-k8s-diff-port-325000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/default-k8s-diff-port-325000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/default-k8s-diff-port-325000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:ca:f9:aa:ea:de -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/default-k8s-diff-port-325000/disk.qcow2
	I0805 10:51:09.360544   11007 main.go:141] libmachine: STDOUT: 
	I0805 10:51:09.360558   11007 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 10:51:09.360583   11007 client.go:171] duration metric: took 376.692541ms to LocalClient.Create
	I0805 10:51:11.362725   11007 start.go:128] duration metric: took 2.403017792s to createHost
	I0805 10:51:11.362860   11007 start.go:83] releasing machines lock for "default-k8s-diff-port-325000", held for 2.403160167s
	W0805 10:51:11.362933   11007 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 10:51:11.373911   11007 out.go:177] * Deleting "default-k8s-diff-port-325000" in qemu2 ...
	W0805 10:51:11.407275   11007 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 10:51:11.407301   11007 start.go:729] Will try again in 5 seconds ...
	I0805 10:51:16.409359   11007 start.go:360] acquireMachinesLock for default-k8s-diff-port-325000: {Name:mkb00f076c5be0afa0d8c4c2e732c0f60f89f86b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 10:51:16.409587   11007 start.go:364] duration metric: took 174.167µs to acquireMachinesLock for "default-k8s-diff-port-325000"
	I0805 10:51:16.409674   11007 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-325000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-325000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 10:51:16.409829   11007 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 10:51:16.417152   11007 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0805 10:51:16.469317   11007 start.go:159] libmachine.API.Create for "default-k8s-diff-port-325000" (driver="qemu2")
	I0805 10:51:16.469363   11007 client.go:168] LocalClient.Create starting
	I0805 10:51:16.469464   11007 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/ca.pem
	I0805 10:51:16.469549   11007 main.go:141] libmachine: Decoding PEM data...
	I0805 10:51:16.469566   11007 main.go:141] libmachine: Parsing certificate...
	I0805 10:51:16.469645   11007 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/cert.pem
	I0805 10:51:16.469701   11007 main.go:141] libmachine: Decoding PEM data...
	I0805 10:51:16.469714   11007 main.go:141] libmachine: Parsing certificate...
	I0805 10:51:16.470284   11007 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19374-6507/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 10:51:16.643136   11007 main.go:141] libmachine: Creating SSH key...
	I0805 10:51:16.769547   11007 main.go:141] libmachine: Creating Disk image...
	I0805 10:51:16.769555   11007 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 10:51:16.769773   11007 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/default-k8s-diff-port-325000/disk.qcow2.raw /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/default-k8s-diff-port-325000/disk.qcow2
	I0805 10:51:16.779226   11007 main.go:141] libmachine: STDOUT: 
	I0805 10:51:16.779243   11007 main.go:141] libmachine: STDERR: 
	I0805 10:51:16.779289   11007 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/default-k8s-diff-port-325000/disk.qcow2 +20000M
	I0805 10:51:16.787105   11007 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 10:51:16.787142   11007 main.go:141] libmachine: STDERR: 
	I0805 10:51:16.787153   11007 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/default-k8s-diff-port-325000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/default-k8s-diff-port-325000/disk.qcow2
	I0805 10:51:16.787157   11007 main.go:141] libmachine: Starting QEMU VM...
	I0805 10:51:16.787164   11007 qemu.go:418] Using hvf for hardware acceleration
	I0805 10:51:16.787192   11007 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/default-k8s-diff-port-325000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/default-k8s-diff-port-325000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/default-k8s-diff-port-325000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:e2:0a:6e:7d:65 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/default-k8s-diff-port-325000/disk.qcow2
	I0805 10:51:16.788841   11007 main.go:141] libmachine: STDOUT: 
	I0805 10:51:16.788855   11007 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 10:51:16.788865   11007 client.go:171] duration metric: took 319.500542ms to LocalClient.Create
	I0805 10:51:18.791019   11007 start.go:128] duration metric: took 2.381199s to createHost
	I0805 10:51:18.791089   11007 start.go:83] releasing machines lock for "default-k8s-diff-port-325000", held for 2.381517334s
	W0805 10:51:18.791572   11007 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-325000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-325000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 10:51:18.802160   11007 out.go:177] 
	W0805 10:51:18.809028   11007 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 10:51:18.809052   11007 out.go:239] * 
	* 
	W0805 10:51:18.811668   11007 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 10:51:18.821093   11007 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-325000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-325000 -n default-k8s-diff-port-325000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-325000 -n default-k8s-diff-port-325000: exit status 7 (63.885042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-325000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (10.06s)
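Every retry in the failure above dies at the same call: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so qemu-system-aarch64 never receives a network file descriptor. A minimal Go sketch (not part of the test suite; the socket path is taken from the libmachine command line above) that reproduces the probe independently of minikube:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // path from the driver logs above
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// On the failing Jenkins host this prints the same
		// "connection refused" seen throughout this log.
		fmt.Printf("cannot reach %s: %v\n", sock, err)
		return
	}
	defer conn.Close()
	fmt.Printf("%s is accepting connections\n", sock)
}

A refused connection while the socket file exists means the socket_vmnet daemon itself is not running, which is consistent with every qemu2 failure in this group.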

TestStartStop/group/newest-cni/serial/FirstStart (9.96s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-907000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-rc.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-907000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-rc.0: exit status 80 (9.897151833s)

-- stdout --
	* [newest-cni-907000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19374
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-907000" primary control-plane node in "newest-cni-907000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-907000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0805 10:51:12.529617   11023 out.go:291] Setting OutFile to fd 1 ...
	I0805 10:51:12.529739   11023 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:51:12.529742   11023 out.go:304] Setting ErrFile to fd 2...
	I0805 10:51:12.529745   11023 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:51:12.529875   11023 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
	I0805 10:51:12.530921   11023 out.go:298] Setting JSON to false
	I0805 10:51:12.547314   11023 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6642,"bootTime":1722873630,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 10:51:12.547376   11023 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 10:51:12.553189   11023 out.go:177] * [newest-cni-907000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 10:51:12.560043   11023 out.go:177]   - MINIKUBE_LOCATION=19374
	I0805 10:51:12.560096   11023 notify.go:220] Checking for updates...
	I0805 10:51:12.567050   11023 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	I0805 10:51:12.570022   11023 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 10:51:12.573074   11023 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 10:51:12.576011   11023 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	I0805 10:51:12.579050   11023 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 10:51:12.582326   11023 config.go:182] Loaded profile config "default-k8s-diff-port-325000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 10:51:12.582405   11023 config.go:182] Loaded profile config "multinode-022000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 10:51:12.582464   11023 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 10:51:12.585974   11023 out.go:177] * Using the qemu2 driver based on user configuration
	I0805 10:51:12.592931   11023 start.go:297] selected driver: qemu2
	I0805 10:51:12.592937   11023 start.go:901] validating driver "qemu2" against <nil>
	I0805 10:51:12.592943   11023 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 10:51:12.595304   11023 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0805 10:51:12.595326   11023 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0805 10:51:12.603016   11023 out.go:177] * Automatically selected the socket_vmnet network
	I0805 10:51:12.606107   11023 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0805 10:51:12.606147   11023 cni.go:84] Creating CNI manager for ""
	I0805 10:51:12.606163   11023 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 10:51:12.606167   11023 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0805 10:51:12.606203   11023 start.go:340] cluster config:
	{Name:newest-cni-907000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:newest-cni-907000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 10:51:12.610050   11023 iso.go:125] acquiring lock: {Name:mk3c732f608e41abb95f9d7b90e7a96dff21b06c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 10:51:12.618053   11023 out.go:177] * Starting "newest-cni-907000" primary control-plane node in "newest-cni-907000" cluster
	I0805 10:51:12.622015   11023 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
	I0805 10:51:12.622034   11023 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4
	I0805 10:51:12.622045   11023 cache.go:56] Caching tarball of preloaded images
	I0805 10:51:12.622120   11023 preload.go:172] Found /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 10:51:12.622127   11023 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-rc.0 on docker
	I0805 10:51:12.622201   11023 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/newest-cni-907000/config.json ...
	I0805 10:51:12.622217   11023 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/newest-cni-907000/config.json: {Name:mk90d4c7f0caac288c21da9e717800e3eb8069c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 10:51:12.622441   11023 start.go:360] acquireMachinesLock for newest-cni-907000: {Name:mkb00f076c5be0afa0d8c4c2e732c0f60f89f86b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 10:51:12.622477   11023 start.go:364] duration metric: took 29.75µs to acquireMachinesLock for "newest-cni-907000"
	I0805 10:51:12.622489   11023 start.go:93] Provisioning new machine with config: &{Name:newest-cni-907000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:newest-cni-907000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 10:51:12.622521   11023 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 10:51:12.630999   11023 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0805 10:51:12.649181   11023 start.go:159] libmachine.API.Create for "newest-cni-907000" (driver="qemu2")
	I0805 10:51:12.649207   11023 client.go:168] LocalClient.Create starting
	I0805 10:51:12.649268   11023 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/ca.pem
	I0805 10:51:12.649305   11023 main.go:141] libmachine: Decoding PEM data...
	I0805 10:51:12.649315   11023 main.go:141] libmachine: Parsing certificate...
	I0805 10:51:12.649353   11023 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/cert.pem
	I0805 10:51:12.649376   11023 main.go:141] libmachine: Decoding PEM data...
	I0805 10:51:12.649384   11023 main.go:141] libmachine: Parsing certificate...
	I0805 10:51:12.649823   11023 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19374-6507/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 10:51:12.804673   11023 main.go:141] libmachine: Creating SSH key...
	I0805 10:51:12.916880   11023 main.go:141] libmachine: Creating Disk image...
	I0805 10:51:12.916886   11023 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 10:51:12.917087   11023 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/newest-cni-907000/disk.qcow2.raw /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/newest-cni-907000/disk.qcow2
	I0805 10:51:12.926406   11023 main.go:141] libmachine: STDOUT: 
	I0805 10:51:12.926424   11023 main.go:141] libmachine: STDERR: 
	I0805 10:51:12.926488   11023 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/newest-cni-907000/disk.qcow2 +20000M
	I0805 10:51:12.934325   11023 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 10:51:12.934340   11023 main.go:141] libmachine: STDERR: 
	I0805 10:51:12.934357   11023 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/newest-cni-907000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/newest-cni-907000/disk.qcow2
	I0805 10:51:12.934362   11023 main.go:141] libmachine: Starting QEMU VM...
	I0805 10:51:12.934374   11023 qemu.go:418] Using hvf for hardware acceleration
	I0805 10:51:12.934401   11023 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/newest-cni-907000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/newest-cni-907000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/newest-cni-907000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:bf:12:3b:c3:fb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/newest-cni-907000/disk.qcow2
	I0805 10:51:12.936027   11023 main.go:141] libmachine: STDOUT: 
	I0805 10:51:12.936039   11023 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 10:51:12.936058   11023 client.go:171] duration metric: took 286.849708ms to LocalClient.Create
	I0805 10:51:14.938225   11023 start.go:128] duration metric: took 2.315716333s to createHost
	I0805 10:51:14.938274   11023 start.go:83] releasing machines lock for "newest-cni-907000", held for 2.315815667s
	W0805 10:51:14.938358   11023 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 10:51:14.948461   11023 out.go:177] * Deleting "newest-cni-907000" in qemu2 ...
	W0805 10:51:14.980345   11023 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 10:51:14.980374   11023 start.go:729] Will try again in 5 seconds ...
	I0805 10:51:19.982464   11023 start.go:360] acquireMachinesLock for newest-cni-907000: {Name:mkb00f076c5be0afa0d8c4c2e732c0f60f89f86b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 10:51:19.982928   11023 start.go:364] duration metric: took 310.25µs to acquireMachinesLock for "newest-cni-907000"
	I0805 10:51:19.983056   11023 start.go:93] Provisioning new machine with config: &{Name:newest-cni-907000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:newest-cni-907000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 10:51:19.983326   11023 start.go:125] createHost starting for "" (driver="qemu2")
	I0805 10:51:19.994041   11023 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0805 10:51:20.044150   11023 start.go:159] libmachine.API.Create for "newest-cni-907000" (driver="qemu2")
	I0805 10:51:20.044206   11023 client.go:168] LocalClient.Create starting
	I0805 10:51:20.044302   11023 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/ca.pem
	I0805 10:51:20.044350   11023 main.go:141] libmachine: Decoding PEM data...
	I0805 10:51:20.044366   11023 main.go:141] libmachine: Parsing certificate...
	I0805 10:51:20.044444   11023 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19374-6507/.minikube/certs/cert.pem
	I0805 10:51:20.044477   11023 main.go:141] libmachine: Decoding PEM data...
	I0805 10:51:20.044488   11023 main.go:141] libmachine: Parsing certificate...
	I0805 10:51:20.045121   11023 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19374-6507/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso...
	I0805 10:51:20.212949   11023 main.go:141] libmachine: Creating SSH key...
	I0805 10:51:20.338453   11023 main.go:141] libmachine: Creating Disk image...
	I0805 10:51:20.338458   11023 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0805 10:51:20.338641   11023 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/newest-cni-907000/disk.qcow2.raw /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/newest-cni-907000/disk.qcow2
	I0805 10:51:20.347774   11023 main.go:141] libmachine: STDOUT: 
	I0805 10:51:20.347803   11023 main.go:141] libmachine: STDERR: 
	I0805 10:51:20.347861   11023 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/newest-cni-907000/disk.qcow2 +20000M
	I0805 10:51:20.355693   11023 main.go:141] libmachine: STDOUT: Image resized.
	
	I0805 10:51:20.355706   11023 main.go:141] libmachine: STDERR: 
	I0805 10:51:20.355722   11023 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/newest-cni-907000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/newest-cni-907000/disk.qcow2
	I0805 10:51:20.355727   11023 main.go:141] libmachine: Starting QEMU VM...
	I0805 10:51:20.355739   11023 qemu.go:418] Using hvf for hardware acceleration
	I0805 10:51:20.355764   11023 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/newest-cni-907000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/newest-cni-907000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/newest-cni-907000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:2b:29:bd:58:e5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/newest-cni-907000/disk.qcow2
	I0805 10:51:20.357325   11023 main.go:141] libmachine: STDOUT: 
	I0805 10:51:20.357341   11023 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 10:51:20.357354   11023 client.go:171] duration metric: took 313.147541ms to LocalClient.Create
	I0805 10:51:22.359508   11023 start.go:128] duration metric: took 2.376184666s to createHost
	I0805 10:51:22.359569   11023 start.go:83] releasing machines lock for "newest-cni-907000", held for 2.376645958s
	W0805 10:51:22.359896   11023 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-907000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-907000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 10:51:22.369827   11023 out.go:177] 
	W0805 10:51:22.377890   11023 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 10:51:22.377914   11023 out.go:239] * 
	* 
	W0805 10:51:22.380405   11023 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 10:51:22.387810   11023 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-907000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-rc.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-907000 -n newest-cni-907000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-907000 -n newest-cni-907000: exit status 7 (62.217333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-907000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (9.96s)
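The trace above also shows minikube's built-in recovery path: the first LocalClient.Create fails, the half-created machine is deleted, and start.go retries once after a fixed five-second pause ("Will try again in 5 seconds ..."), which is why each FirstStart failure costs roughly ten seconds. A condensed Go sketch of that retry shape (illustrative only, not the actual start.go implementation):

package main

import (
	"errors"
	"fmt"
	"time"
)

// startHost stands in for the driver start that fails in this run;
// here it always fails the way socket_vmnet does above.
func startHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	if err := startHost(); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second) // mirrors "Will try again in 5 seconds ..."
		if err = startHost(); err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		}
	}
}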

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-325000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-325000 create -f testdata/busybox.yaml: exit status 1 (30.017417ms)

** stderr ** 
	error: context "default-k8s-diff-port-325000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-325000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-325000 -n default-k8s-diff-port-325000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-325000 -n default-k8s-diff-port-325000: exit status 7 (28.829459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-325000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-325000 -n default-k8s-diff-port-325000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-325000 -n default-k8s-diff-port-325000: exit status 7 (28.457125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-325000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)
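Because FirstStart never provisioned a host, no kubeconfig context named default-k8s-diff-port-325000 was ever written, so every kubectl call in this subtest fails at context lookup rather than at the API server. A hedged Go sketch of guarding such calls by listing contexts first (`kubectl config get-contexts -o name` is standard kubectl; the context name comes from the log above):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hasContext reports whether the named context exists in the active kubeconfig.
func hasContext(name string) (bool, error) {
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
	if err != nil {
		return false, err
	}
	for _, ctx := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if ctx == name {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := hasContext("default-k8s-diff-port-325000")
	// On this host: false, nil - hence `create -f` fails immediately.
	fmt.Println("context exists:", ok, "err:", err)
}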

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-325000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-325000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-325000 describe deploy/metrics-server -n kube-system: exit status 1 (26.656625ms)

** stderr ** 
	error: context "default-k8s-diff-port-325000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-325000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-325000 -n default-k8s-diff-port-325000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-325000 -n default-k8s-diff-port-325000: exit status 7 (29.249125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-325000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)
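The assertion at start_stop_delete_test.go:221 expects the metrics-server deployment to reference fake.domain/registry.k8s.io/echoserver:1.4, i.e. the custom image and registry passed to `addons enable` above. On a live cluster that field can be read straight from the deployment spec; a hedged Go sketch of the equivalent check (the jsonpath expression is standard kubectl, the names come from the test arguments above):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Read the first container image of the metrics-server deployment.
	out, err := exec.Command("kubectl", "--context", "default-k8s-diff-port-325000",
		"-n", "kube-system", "get", "deploy", "metrics-server",
		"-o", "jsonpath={.spec.template.spec.containers[0].image}").Output()
	if err != nil {
		fmt.Println("lookup failed:", err) // here: the context does not exist
		return
	}
	img := strings.TrimSpace(string(out))
	fmt.Println("image matches:", strings.Contains(img, "fake.domain/registry.k8s.io/echoserver:1.4"))
}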

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (6.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-325000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-325000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (6.212424291s)

-- stdout --
	* [default-k8s-diff-port-325000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19374
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-325000" primary control-plane node in "default-k8s-diff-port-325000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-325000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-325000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0805 10:51:21.266086   11071 out.go:291] Setting OutFile to fd 1 ...
	I0805 10:51:21.266211   11071 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:51:21.266215   11071 out.go:304] Setting ErrFile to fd 2...
	I0805 10:51:21.266217   11071 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:51:21.266350   11071 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
	I0805 10:51:21.267339   11071 out.go:298] Setting JSON to false
	I0805 10:51:21.283210   11071 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6651,"bootTime":1722873630,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 10:51:21.283272   11071 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 10:51:21.288240   11071 out.go:177] * [default-k8s-diff-port-325000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 10:51:21.294953   11071 out.go:177]   - MINIKUBE_LOCATION=19374
	I0805 10:51:21.295035   11071 notify.go:220] Checking for updates...
	I0805 10:51:21.302029   11071 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	I0805 10:51:21.303300   11071 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 10:51:21.306050   11071 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 10:51:21.309084   11071 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	I0805 10:51:21.312098   11071 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 10:51:21.315320   11071 config.go:182] Loaded profile config "default-k8s-diff-port-325000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 10:51:21.315597   11071 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 10:51:21.320141   11071 out.go:177] * Using the qemu2 driver based on existing profile
	I0805 10:51:21.329084   11071 start.go:297] selected driver: qemu2
	I0805 10:51:21.329091   11071 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-325000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-325000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 10:51:21.329157   11071 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 10:51:21.331382   11071 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 10:51:21.331407   11071 cni.go:84] Creating CNI manager for ""
	I0805 10:51:21.331414   11071 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 10:51:21.331450   11071 start.go:340] cluster config:
	{Name:default-k8s-diff-port-325000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-325000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 10:51:21.334870   11071 iso.go:125] acquiring lock: {Name:mk3c732f608e41abb95f9d7b90e7a96dff21b06c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 10:51:21.341991   11071 out.go:177] * Starting "default-k8s-diff-port-325000" primary control-plane node in "default-k8s-diff-port-325000" cluster
	I0805 10:51:21.345059   11071 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 10:51:21.345081   11071 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0805 10:51:21.345098   11071 cache.go:56] Caching tarball of preloaded images
	I0805 10:51:21.345169   11071 preload.go:172] Found /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 10:51:21.345175   11071 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 10:51:21.345236   11071 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/default-k8s-diff-port-325000/config.json ...
	I0805 10:51:21.345728   11071 start.go:360] acquireMachinesLock for default-k8s-diff-port-325000: {Name:mkb00f076c5be0afa0d8c4c2e732c0f60f89f86b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 10:51:22.359680   11071 start.go:364] duration metric: took 1.013942708s to acquireMachinesLock for "default-k8s-diff-port-325000"
	I0805 10:51:22.359876   11071 start.go:96] Skipping create...Using existing machine configuration
	I0805 10:51:22.359934   11071 fix.go:54] fixHost starting: 
	I0805 10:51:22.360668   11071 fix.go:112] recreateIfNeeded on default-k8s-diff-port-325000: state=Stopped err=<nil>
	W0805 10:51:22.360717   11071 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 10:51:22.369826   11071 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-325000" ...
	I0805 10:51:22.377860   11071 qemu.go:418] Using hvf for hardware acceleration
	I0805 10:51:22.378038   11071 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/default-k8s-diff-port-325000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/default-k8s-diff-port-325000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/default-k8s-diff-port-325000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:e2:0a:6e:7d:65 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/default-k8s-diff-port-325000/disk.qcow2
	I0805 10:51:22.387810   11071 main.go:141] libmachine: STDOUT: 
	I0805 10:51:22.387861   11071 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 10:51:22.388014   11071 fix.go:56] duration metric: took 28.088583ms for fixHost
	I0805 10:51:22.388033   11071 start.go:83] releasing machines lock for "default-k8s-diff-port-325000", held for 28.320916ms
	W0805 10:51:22.388063   11071 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 10:51:22.388278   11071 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 10:51:22.388300   11071 start.go:729] Will try again in 5 seconds ...
	I0805 10:51:27.390439   11071 start.go:360] acquireMachinesLock for default-k8s-diff-port-325000: {Name:mkb00f076c5be0afa0d8c4c2e732c0f60f89f86b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 10:51:27.390919   11071 start.go:364] duration metric: took 359.042µs to acquireMachinesLock for "default-k8s-diff-port-325000"
	I0805 10:51:27.391039   11071 start.go:96] Skipping create...Using existing machine configuration
	I0805 10:51:27.391056   11071 fix.go:54] fixHost starting: 
	I0805 10:51:27.391777   11071 fix.go:112] recreateIfNeeded on default-k8s-diff-port-325000: state=Stopped err=<nil>
	W0805 10:51:27.391807   11071 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 10:51:27.401469   11071 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-325000" ...
	I0805 10:51:27.404440   11071 qemu.go:418] Using hvf for hardware acceleration
	I0805 10:51:27.404673   11071 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/default-k8s-diff-port-325000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/default-k8s-diff-port-325000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/default-k8s-diff-port-325000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:e2:0a:6e:7d:65 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/default-k8s-diff-port-325000/disk.qcow2
	I0805 10:51:27.413640   11071 main.go:141] libmachine: STDOUT: 
	I0805 10:51:27.413710   11071 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 10:51:27.413793   11071 fix.go:56] duration metric: took 22.736667ms for fixHost
	I0805 10:51:27.413816   11071 start.go:83] releasing machines lock for "default-k8s-diff-port-325000", held for 22.875959ms
	W0805 10:51:27.414023   11071 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-325000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-325000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 10:51:27.421490   11071 out.go:177] 
	W0805 10:51:27.425751   11071 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 10:51:27.425782   11071 out.go:239] * 
	* 
	W0805 10:51:27.428489   11071 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 10:51:27.437452   11071 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-325000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-325000 -n default-k8s-diff-port-325000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-325000 -n default-k8s-diff-port-325000: exit status 7 (65.499791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-325000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (6.28s)
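Unlike FirstStart, SecondStart takes the fixHost path: the profile and disk image already exist, the machine is read back as state=Stopped, and the same qemu command line is replayed, pidfile and all. One plausible way to derive that Stopped state from the artifacts in this log is the qemu.pid file named in the command line; a Go sketch of such a liveness check (illustrative only, not minikube's actual implementation; the pidfile path is copied from the log above):

package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
	"syscall"
)

// vmRunning reports whether the process recorded in a qemu pidfile is still
// alive, using signal 0 as a pure liveness probe (no signal is delivered).
func vmRunning(pidfile string) (bool, error) {
	data, err := os.ReadFile(pidfile)
	if err != nil {
		return false, err // no pidfile: qemu never daemonized
	}
	pid, err := strconv.Atoi(strings.TrimSpace(string(data)))
	if err != nil {
		return false, err
	}
	return syscall.Kill(pid, 0) == nil, nil
}

func main() {
	running, err := vmRunning("/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/default-k8s-diff-port-325000/qemu.pid")
	fmt.Println("running:", running, "err:", err)
}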

TestStartStop/group/newest-cni/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-907000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-rc.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-907000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-rc.0: exit status 80 (5.187179292s)

-- stdout --
	* [newest-cni-907000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19374
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-907000" primary control-plane node in "newest-cni-907000" cluster
	* Restarting existing qemu2 VM for "newest-cni-907000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-907000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0805 10:51:26.251031   11106 out.go:291] Setting OutFile to fd 1 ...
	I0805 10:51:26.251166   11106 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:51:26.251170   11106 out.go:304] Setting ErrFile to fd 2...
	I0805 10:51:26.251172   11106 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:51:26.251313   11106 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
	I0805 10:51:26.252519   11106 out.go:298] Setting JSON to false
	I0805 10:51:26.268968   11106 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6656,"bootTime":1722873630,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 10:51:26.269038   11106 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 10:51:26.276450   11106 out.go:177] * [newest-cni-907000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 10:51:26.284619   11106 out.go:177]   - MINIKUBE_LOCATION=19374
	I0805 10:51:26.284664   11106 notify.go:220] Checking for updates...
	I0805 10:51:26.290543   11106 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	I0805 10:51:26.293583   11106 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 10:51:26.294874   11106 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 10:51:26.297573   11106 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	I0805 10:51:26.300648   11106 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 10:51:26.303913   11106 config.go:182] Loaded profile config "newest-cni-907000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-rc.0
	I0805 10:51:26.304185   11106 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 10:51:26.308532   11106 out.go:177] * Using the qemu2 driver based on existing profile
	I0805 10:51:26.315583   11106 start.go:297] selected driver: qemu2
	I0805 10:51:26.315593   11106 start.go:901] validating driver "qemu2" against &{Name:newest-cni-907000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:newest-cni-907000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 10:51:26.315659   11106 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 10:51:26.318114   11106 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0805 10:51:26.318168   11106 cni.go:84] Creating CNI manager for ""
	I0805 10:51:26.318176   11106 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 10:51:26.318212   11106 start.go:340] cluster config:
	{Name:newest-cni-907000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:newest-cni-907000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 10:51:26.321961   11106 iso.go:125] acquiring lock: {Name:mk3c732f608e41abb95f9d7b90e7a96dff21b06c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 10:51:26.330577   11106 out.go:177] * Starting "newest-cni-907000" primary control-plane node in "newest-cni-907000" cluster
	I0805 10:51:26.334504   11106 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
	I0805 10:51:26.334523   11106 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4
	I0805 10:51:26.334534   11106 cache.go:56] Caching tarball of preloaded images
	I0805 10:51:26.334597   11106 preload.go:172] Found /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0805 10:51:26.334604   11106 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-rc.0 on docker
	I0805 10:51:26.334662   11106 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/newest-cni-907000/config.json ...
	I0805 10:51:26.335109   11106 start.go:360] acquireMachinesLock for newest-cni-907000: {Name:mkb00f076c5be0afa0d8c4c2e732c0f60f89f86b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 10:51:26.335141   11106 start.go:364] duration metric: took 26.583µs to acquireMachinesLock for "newest-cni-907000"
	I0805 10:51:26.335149   11106 start.go:96] Skipping create...Using existing machine configuration
	I0805 10:51:26.335156   11106 fix.go:54] fixHost starting: 
	I0805 10:51:26.335271   11106 fix.go:112] recreateIfNeeded on newest-cni-907000: state=Stopped err=<nil>
	W0805 10:51:26.335279   11106 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 10:51:26.338640   11106 out.go:177] * Restarting existing qemu2 VM for "newest-cni-907000" ...
	I0805 10:51:26.346535   11106 qemu.go:418] Using hvf for hardware acceleration
	I0805 10:51:26.346573   11106 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/newest-cni-907000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/newest-cni-907000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/newest-cni-907000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:2b:29:bd:58:e5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/newest-cni-907000/disk.qcow2
	I0805 10:51:26.348631   11106 main.go:141] libmachine: STDOUT: 
	I0805 10:51:26.348679   11106 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 10:51:26.348706   11106 fix.go:56] duration metric: took 13.551041ms for fixHost
	I0805 10:51:26.348711   11106 start.go:83] releasing machines lock for "newest-cni-907000", held for 13.566208ms
	W0805 10:51:26.348717   11106 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 10:51:26.348751   11106 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 10:51:26.348755   11106 start.go:729] Will try again in 5 seconds ...
	I0805 10:51:31.351019   11106 start.go:360] acquireMachinesLock for newest-cni-907000: {Name:mkb00f076c5be0afa0d8c4c2e732c0f60f89f86b Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 10:51:31.351461   11106 start.go:364] duration metric: took 331.083µs to acquireMachinesLock for "newest-cni-907000"
	I0805 10:51:31.351598   11106 start.go:96] Skipping create...Using existing machine configuration
	I0805 10:51:31.351618   11106 fix.go:54] fixHost starting: 
	I0805 10:51:31.352415   11106 fix.go:112] recreateIfNeeded on newest-cni-907000: state=Stopped err=<nil>
	W0805 10:51:31.352445   11106 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 10:51:31.357917   11106 out.go:177] * Restarting existing qemu2 VM for "newest-cni-907000" ...
	I0805 10:51:31.364848   11106 qemu.go:418] Using hvf for hardware acceleration
	I0805 10:51:31.365045   11106 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/newest-cni-907000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19374-6507/.minikube/machines/newest-cni-907000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/newest-cni-907000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:2b:29:bd:58:e5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19374-6507/.minikube/machines/newest-cni-907000/disk.qcow2
	I0805 10:51:31.374525   11106 main.go:141] libmachine: STDOUT: 
	I0805 10:51:31.374597   11106 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0805 10:51:31.374727   11106 fix.go:56] duration metric: took 23.107959ms for fixHost
	I0805 10:51:31.374755   11106 start.go:83] releasing machines lock for "newest-cni-907000", held for 23.270334ms
	W0805 10:51:31.375040   11106 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-907000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-907000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0805 10:51:31.383886   11106 out.go:177] 
	W0805 10:51:31.386902   11106 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0805 10:51:31.386927   11106 out.go:239] * 
	* 
	W0805 10:51:31.389651   11106 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 10:51:31.401884   11106 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-907000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-rc.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-907000 -n newest-cni-907000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-907000 -n newest-cni-907000: exit status 7 (68.140958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-907000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.26s)
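As with the previous group, the start is attempted twice: the stderr shows "! StartHost failed, but will try again", a five-second wait (start.go:729 "Will try again in 5 seconds ..."), a second identical dial failure, and only then the GUEST_PROVISION exit. A compressed sketch of that retry shape; the names below are hypothetical stand-ins for minikube's internals, not its actual API:

// retry_sketch.go - the start/retry pattern visible in the stderr above:
// one failure, a 5s pause, one retry, then a fatal GUEST_PROVISION exit.
package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// startHost stands in for the qemu2 driver start that dials /var/run/socket_vmnet.
func startHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	err := startHost()
	if err == nil {
		return
	}
	fmt.Printf("! StartHost failed, but will try again: %v\n", err)
	time.Sleep(5 * time.Second)
	if err := startHost(); err != nil {
		fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
		os.Exit(80) // matches the "exit status 80" reported by the harness
	}
}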

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-325000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-325000 -n default-k8s-diff-port-325000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-325000 -n default-k8s-diff-port-325000: exit status 7 (32.016833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-325000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)
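The post-mortem helper polls out/minikube-darwin-arm64 status --format={{.Host}}; the --format value is a Go text/template evaluated against minikube's status structure, which is why stdout is the bare word "Stopped". A minimal sketch of that mechanism (the Status type below is an illustrative stand-in, not minikube's actual struct):

// status_format_sketch.go - renders a {{.Host}} template the way the
// "status --format={{.Host}}" calls in the post-mortems do.
package main

import (
	"os"
	"text/template"
)

// Status is a hypothetical stand-in for the struct minikube formats.
type Status struct {
	Host      string
	Kubelet   string
	APIServer string
}

func main() {
	st := Status{Host: "Stopped", Kubelet: "Stopped", APIServer: "Stopped"}
	tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
	if err := tmpl.Execute(os.Stdout, st); err != nil { // prints: Stopped
		panic(err)
	}
}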

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-325000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-325000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-325000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.90725ms)

** stderr ** 
	error: context "default-k8s-diff-port-325000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-325000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-325000 -n default-k8s-diff-port-325000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-325000 -n default-k8s-diff-port-325000: exit status 7 (29.2485ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-325000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-325000 image list --format=json
start_stop_delete_test.go:304: v1.30.3 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.3",
- 	"registry.k8s.io/kube-controller-manager:v1.30.3",
- 	"registry.k8s.io/kube-proxy:v1.30.3",
- 	"registry.k8s.io/kube-scheduler:v1.30.3",
- 	"registry.k8s.io/pause:3.9",
}
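The "(-want +got)" listing above is go-cmp diff notation: each expected image is prefixed with "-" because it is present in the wanted list but absent from the actual one, and there are no "+" lines because the stopped VM returned an empty image list. A sketch of how such output is produced, assuming the github.com/google/go-cmp/cmp package (whose format this matches):

// diff_sketch.go - reproduces the "-want +got" notation, assuming go-cmp.
package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	want := []string{
		"registry.k8s.io/kube-apiserver:v1.30.3",
		"registry.k8s.io/pause:3.9",
	}
	var got []string // empty, as when "image list" runs against a stopped VM
	// "-" marks entries only in want; "+" would mark entries only in got.
	fmt.Println(cmp.Diff(want, got))
}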
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-325000 -n default-k8s-diff-port-325000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-325000 -n default-k8s-diff-port-325000: exit status 7 (28.520583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-325000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-325000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-325000 --alsologtostderr -v=1: exit status 83 (39.442375ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-325000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-325000"

-- /stdout --
** stderr ** 
	I0805 10:51:27.699278   11125 out.go:291] Setting OutFile to fd 1 ...
	I0805 10:51:27.699504   11125 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:51:27.699507   11125 out.go:304] Setting ErrFile to fd 2...
	I0805 10:51:27.699509   11125 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:51:27.699637   11125 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
	I0805 10:51:27.699867   11125 out.go:298] Setting JSON to false
	I0805 10:51:27.699873   11125 mustload.go:65] Loading cluster: default-k8s-diff-port-325000
	I0805 10:51:27.700080   11125 config.go:182] Loaded profile config "default-k8s-diff-port-325000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 10:51:27.703785   11125 out.go:177] * The control-plane node default-k8s-diff-port-325000 host is not running: state=Stopped
	I0805 10:51:27.707584   11125 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-325000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-325000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-325000 -n default-k8s-diff-port-325000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-325000 -n default-k8s-diff-port-325000: exit status 7 (28.998625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-325000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-325000 -n default-k8s-diff-port-325000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-325000 -n default-k8s-diff-port-325000: exit status 7 (28.832041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-325000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-907000 image list --format=json
start_stop_delete_test.go:304: v1.31.0-rc.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0-rc.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0-rc.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0-rc.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0-rc.0",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-907000 -n newest-cni-907000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-907000 -n newest-cni-907000: exit status 7 (29.592625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-907000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/newest-cni/serial/Pause (0.1s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-907000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-907000 --alsologtostderr -v=1: exit status 83 (40.879ms)

-- stdout --
	* The control-plane node newest-cni-907000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-907000"

-- /stdout --
** stderr ** 
	I0805 10:51:31.581836   11151 out.go:291] Setting OutFile to fd 1 ...
	I0805 10:51:31.581977   11151 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:51:31.581980   11151 out.go:304] Setting ErrFile to fd 2...
	I0805 10:51:31.581983   11151 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:51:31.582114   11151 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
	I0805 10:51:31.582314   11151 out.go:298] Setting JSON to false
	I0805 10:51:31.582319   11151 mustload.go:65] Loading cluster: newest-cni-907000
	I0805 10:51:31.582507   11151 config.go:182] Loaded profile config "newest-cni-907000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-rc.0
	I0805 10:51:31.585493   11151 out.go:177] * The control-plane node newest-cni-907000 host is not running: state=Stopped
	I0805 10:51:31.589518   11151 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-907000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-907000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-907000 -n newest-cni-907000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-907000 -n newest-cni-907000: exit status 7 (29.475333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-907000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-907000 -n newest-cni-907000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-907000 -n newest-cni-907000: exit status 7 (30.739083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-907000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)


Test pass (86/266)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.12
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.1
12 TestDownloadOnly/v1.30.3/json-events 12.88
13 TestDownloadOnly/v1.30.3/preload-exists 0
16 TestDownloadOnly/v1.30.3/kubectl 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.08
18 TestDownloadOnly/v1.30.3/DeleteAll 0.11
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.1
21 TestDownloadOnly/v1.31.0-rc.0/json-events 16.33
22 TestDownloadOnly/v1.31.0-rc.0/preload-exists 0
25 TestDownloadOnly/v1.31.0-rc.0/kubectl 0
26 TestDownloadOnly/v1.31.0-rc.0/LogsDuration 0.08
27 TestDownloadOnly/v1.31.0-rc.0/DeleteAll 0.11
28 TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds 0.1
30 TestBinaryMirror 0.29
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
44 TestHyperKitDriverInstallOrUpdate 10.41
48 TestErrorSpam/start 0.37
49 TestErrorSpam/status 0.09
50 TestErrorSpam/pause 0.12
51 TestErrorSpam/unpause 0.12
52 TestErrorSpam/stop 9
55 TestFunctional/serial/CopySyncFile 0
57 TestFunctional/serial/AuditLog 0
63 TestFunctional/serial/CacheCmd/cache/add_remote 1.65
64 TestFunctional/serial/CacheCmd/cache/add_local 1.04
65 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
66 TestFunctional/serial/CacheCmd/cache/list 0.03
69 TestFunctional/serial/CacheCmd/cache/delete 0.07
78 TestFunctional/parallel/ConfigCmd 0.23
80 TestFunctional/parallel/DryRun 0.22
81 TestFunctional/parallel/InternationalLanguage 0.11
87 TestFunctional/parallel/AddonsCmd 0.09
102 TestFunctional/parallel/License 0.21
105 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
115 TestFunctional/parallel/ProfileCmd/profile_not_create 0.09
116 TestFunctional/parallel/ProfileCmd/profile_list 0.08
117 TestFunctional/parallel/ProfileCmd/profile_json_output 0.08
121 TestFunctional/parallel/Version/short 0.04
128 TestFunctional/parallel/ImageCommands/Setup 1.8
133 TestFunctional/parallel/ImageCommands/ImageRemove 0.07
135 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.08
141 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 10.04
143 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.16
144 TestFunctional/delete_echo-server_images 0.07
145 TestFunctional/delete_my-image_image 0.02
146 TestFunctional/delete_minikube_cached_images 0.02
175 TestJSONOutput/start/Audit 0
177 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
178 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
181 TestJSONOutput/pause/Audit 0
183 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/unpause/Audit 0
189 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/stop/Command 3.67
193 TestJSONOutput/stop/Audit 0
195 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
197 TestErrorJSONOutput 0.2
202 TestMainNoArgs 0.03
247 TestStoppedBinaryUpgrade/Setup 1.74
249 TestStoppedBinaryUpgrade/MinikubeLogs 0.77
260 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
266 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
267 TestNoKubernetes/serial/ProfileList 0.1
268 TestNoKubernetes/serial/Stop 3.11
270 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
284 TestStartStop/group/old-k8s-version/serial/Stop 3.51
285 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.12
295 TestStartStop/group/no-preload/serial/Stop 3.46
296 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.13
306 TestStartStop/group/embed-certs/serial/Stop 1.97
307 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.13
319 TestStartStop/group/default-k8s-diff-port/serial/Stop 2.02
320 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.12
322 TestStartStop/group/newest-cni/serial/DeployApp 0
323 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
324 TestStartStop/group/newest-cni/serial/Stop 3.58
325 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.12
331 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
332 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-834000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-834000: exit status 85 (96.8725ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-834000 | jenkins | v1.33.1 | 05 Aug 24 10:25 PDT |          |
	|         | -p download-only-834000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/05 10:25:29
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0805 10:25:29.360394    7009 out.go:291] Setting OutFile to fd 1 ...
	I0805 10:25:29.360546    7009 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:25:29.360549    7009 out.go:304] Setting ErrFile to fd 2...
	I0805 10:25:29.360551    7009 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:25:29.360684    7009 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
	W0805 10:25:29.360768    7009 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19374-6507/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19374-6507/.minikube/config/config.json: no such file or directory
	I0805 10:25:29.362145    7009 out.go:298] Setting JSON to true
	I0805 10:25:29.380060    7009 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5099,"bootTime":1722873630,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 10:25:29.380150    7009 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 10:25:29.384923    7009 out.go:97] [download-only-834000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 10:25:29.385051    7009 notify.go:220] Checking for updates...
	W0805 10:25:29.385083    7009 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/preloaded-tarball: no such file or directory
	I0805 10:25:29.387624    7009 out.go:169] MINIKUBE_LOCATION=19374
	I0805 10:25:29.391277    7009 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	I0805 10:25:29.395632    7009 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 10:25:29.398666    7009 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 10:25:29.401710    7009 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	W0805 10:25:29.407684    7009 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0805 10:25:29.407900    7009 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 10:25:29.411656    7009 out.go:97] Using the qemu2 driver based on user configuration
	I0805 10:25:29.411675    7009 start.go:297] selected driver: qemu2
	I0805 10:25:29.411688    7009 start.go:901] validating driver "qemu2" against <nil>
	I0805 10:25:29.411751    7009 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 10:25:29.414651    7009 out.go:169] Automatically selected the socket_vmnet network
	I0805 10:25:29.419991    7009 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0805 10:25:29.420101    7009 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0805 10:25:29.420160    7009 cni.go:84] Creating CNI manager for ""
	I0805 10:25:29.420179    7009 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0805 10:25:29.420235    7009 start.go:340] cluster config:
	{Name:download-only-834000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-834000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 10:25:29.424166    7009 iso.go:125] acquiring lock: {Name:mk3c732f608e41abb95f9d7b90e7a96dff21b06c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 10:25:29.428732    7009 out.go:97] Downloading VM boot image ...
	I0805 10:25:29.428753    7009 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/iso/arm64/minikube-v1.33.1-1722248113-19339-arm64.iso
	I0805 10:25:33.912534    7009 out.go:97] Starting "download-only-834000" primary control-plane node in "download-only-834000" cluster
	I0805 10:25:33.912563    7009 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0805 10:25:33.969214    7009 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0805 10:25:33.969222    7009 cache.go:56] Caching tarball of preloaded images
	I0805 10:25:33.969371    7009 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0805 10:25:33.973644    7009 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0805 10:25:33.973651    7009 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0805 10:25:34.051778    7009 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0805 10:25:39.225091    7009 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0805 10:25:39.225222    7009 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0805 10:25:39.927063    7009 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0805 10:25:39.927275    7009 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/download-only-834000/config.json ...
	I0805 10:25:39.927292    7009 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/download-only-834000/config.json: {Name:mk34c7f5922259b3af4097cf016aa54c3298cc91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 10:25:39.927962    7009 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0805 10:25:39.928252    7009 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0805 10:25:40.266233    7009 out.go:169] 
	W0805 10:25:40.272309    7009 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19374-6507/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1047b9aa0 0x1047b9aa0 0x1047b9aa0 0x1047b9aa0 0x1047b9aa0 0x1047b9aa0 0x1047b9aa0] Decompressors:map[bz2:0x1400048ff90 gz:0x1400048ff98 tar:0x1400048ff40 tar.bz2:0x1400048ff50 tar.gz:0x1400048ff60 tar.xz:0x1400048ff70 tar.zst:0x1400048ff80 tbz2:0x1400048ff50 tgz:0x1400048ff60 txz:0x1400048ff70 tzst:0x1400048ff80 xz:0x1400048ffa0 zip:0x1400048ffb0 zst:0x1400048ffa8] Getters:map[file:0x14000063830 http:0x14000844730 https:0x14000844780] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0805 10:25:40.272345    7009 out_reason.go:110] 
	W0805 10:25:40.280264    7009 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 10:25:40.284255    7009 out.go:169] 
	
	
	* The control-plane node download-only-834000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-834000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)
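The "Failed to cache kubectl" error captured in the log above comes from the hashicorp/go-getter library (the &{Ctx:... Detectors:... Decompressors:...} dump is its client state). The ?checksum=file:<url> query tells the getter to download the referenced .sha256 file and verify the artifact against it, and it is that checksum URL that returns the 404 here. A minimal reproduction sketch, assuming go-getter v1, with the URL taken from the log:

// getter_sketch.go - exercises the checksum-file convention shown above,
// assuming github.com/hashicorp/go-getter (v1).
package main

import (
	"fmt"

	"github.com/hashicorp/go-getter"
)

func main() {
	src := "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl" +
		"?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"
	client := &getter.Client{Src: src, Dst: "kubectl.download", Mode: getter.ClientModeFile}
	if err := client.Get(); err != nil {
		// Expected here: "Error downloading checksum file: bad response code: 404".
		fmt.Println("download failed:", err)
	}
}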

TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-834000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.10s)

TestDownloadOnly/v1.30.3/json-events (12.88s)

=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-998000 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-998000 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=qemu2 : (12.879474625s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (12.88s)

TestDownloadOnly/v1.30.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

TestDownloadOnly/v1.30.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.3/kubectl
--- PASS: TestDownloadOnly/v1.30.3/kubectl (0.00s)

TestDownloadOnly/v1.30.3/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-998000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-998000: exit status 85 (81.442667ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-834000 | jenkins | v1.33.1 | 05 Aug 24 10:25 PDT |                     |
	|         | -p download-only-834000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 05 Aug 24 10:25 PDT | 05 Aug 24 10:25 PDT |
	| delete  | -p download-only-834000        | download-only-834000 | jenkins | v1.33.1 | 05 Aug 24 10:25 PDT | 05 Aug 24 10:25 PDT |
	| start   | -o=json --download-only        | download-only-998000 | jenkins | v1.33.1 | 05 Aug 24 10:25 PDT |                     |
	|         | -p download-only-998000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/05 10:25:40
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0805 10:25:40.706981    7037 out.go:291] Setting OutFile to fd 1 ...
	I0805 10:25:40.707130    7037 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:25:40.707134    7037 out.go:304] Setting ErrFile to fd 2...
	I0805 10:25:40.707136    7037 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:25:40.707252    7037 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
	I0805 10:25:40.708295    7037 out.go:298] Setting JSON to true
	I0805 10:25:40.727144    7037 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5110,"bootTime":1722873630,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 10:25:40.727220    7037 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 10:25:40.732143    7037 out.go:97] [download-only-998000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 10:25:40.732234    7037 notify.go:220] Checking for updates...
	I0805 10:25:40.736171    7037 out.go:169] MINIKUBE_LOCATION=19374
	I0805 10:25:40.739347    7037 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	I0805 10:25:40.743275    7037 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 10:25:40.746300    7037 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 10:25:40.749920    7037 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	W0805 10:25:40.756209    7037 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0805 10:25:40.756352    7037 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 10:25:40.759232    7037 out.go:97] Using the qemu2 driver based on user configuration
	I0805 10:25:40.759240    7037 start.go:297] selected driver: qemu2
	I0805 10:25:40.759244    7037 start.go:901] validating driver "qemu2" against <nil>
	I0805 10:25:40.759299    7037 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 10:25:40.762219    7037 out.go:169] Automatically selected the socket_vmnet network
	I0805 10:25:40.767234    7037 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0805 10:25:40.767326    7037 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0805 10:25:40.767342    7037 cni.go:84] Creating CNI manager for ""
	I0805 10:25:40.767351    7037 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 10:25:40.767358    7037 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0805 10:25:40.767394    7037 start.go:340] cluster config:
	{Name:download-only-998000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:download-only-998000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 10:25:40.771073    7037 iso.go:125] acquiring lock: {Name:mk3c732f608e41abb95f9d7b90e7a96dff21b06c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 10:25:40.774208    7037 out.go:97] Starting "download-only-998000" primary control-plane node in "download-only-998000" cluster
	I0805 10:25:40.774216    7037 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 10:25:40.833323    7037 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0805 10:25:40.833333    7037 cache.go:56] Caching tarball of preloaded images
	I0805 10:25:40.833600    7037 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 10:25:40.838791    7037 out.go:97] Downloading Kubernetes v1.30.3 preload ...
	I0805 10:25:40.838798    7037 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 ...
	I0805 10:25:40.919378    7037 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4?checksum=md5:5a76dba1959f6b6fc5e29e1e172ab9ca -> /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0805 10:25:45.212140    7037 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 ...
	I0805 10:25:45.212294    7037 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 ...
	I0805 10:25:45.755880    7037 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 10:25:45.756084    7037 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/download-only-998000/config.json ...
	I0805 10:25:45.756100    7037 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/download-only-998000/config.json: {Name:mk9ae0c5eadb6bc1a7a9e6b09916c3ad6bea5704 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 10:25:45.756658    7037 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 10:25:45.756811    7037 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/darwin/arm64/v1.30.3/kubectl
	
	
	* The control-plane node download-only-998000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-998000"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
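Note: exit status 85 is the expected result here, not a failure — a --download-only start caches the preload tarball and kubectl binary without ever creating the control-plane host, so "minikube logs" has nothing to collect. A minimal reproduction sketch, assuming the same darwin/arm64 build and a hypothetical profile name "demo" (the cache path follows MINIKUBE_HOME, which this run set to /Users/jenkins/minikube-integration/19374-6507/.minikube):
	out/minikube-darwin-arm64 start -o=json --download-only -p demo --force --kubernetes-version=v1.30.3 --container-runtime=docker --driver=qemu2
	out/minikube-darwin-arm64 logs -p demo; echo "exit: $?"    # expect "exit: 85" — the host does not exist
	# optional: verify the cached preload against the md5 in the ?checksum= fragment logged above
	md5 "$MINIKUBE_HOME/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4"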
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.08s)
=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.11s)
=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-998000
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.10s)
=== RUN   TestDownloadOnly/v1.31.0-rc.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-689000 --force --alsologtostderr --kubernetes-version=v1.31.0-rc.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-689000 --force --alsologtostderr --kubernetes-version=v1.31.0-rc.0 --container-runtime=docker --driver=qemu2 : (16.329133917s)
--- PASS: TestDownloadOnly/v1.31.0-rc.0/json-events (16.33s)
=== RUN   TestDownloadOnly/v1.31.0-rc.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-rc.0/preload-exists (0.00s)
=== RUN   TestDownloadOnly/v1.31.0-rc.0/kubectl
--- PASS: TestDownloadOnly/v1.31.0-rc.0/kubectl (0.00s)
=== RUN   TestDownloadOnly/v1.31.0-rc.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-689000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-689000: exit status 85 (76.498083ms)
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-834000 | jenkins | v1.33.1 | 05 Aug 24 10:25 PDT |                     |
	|         | -p download-only-834000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=qemu2                    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.1 | 05 Aug 24 10:25 PDT | 05 Aug 24 10:25 PDT |
	| delete  | -p download-only-834000           | download-only-834000 | jenkins | v1.33.1 | 05 Aug 24 10:25 PDT | 05 Aug 24 10:25 PDT |
	| start   | -o=json --download-only           | download-only-998000 | jenkins | v1.33.1 | 05 Aug 24 10:25 PDT |                     |
	|         | -p download-only-998000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=qemu2                    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.1 | 05 Aug 24 10:25 PDT | 05 Aug 24 10:25 PDT |
	| delete  | -p download-only-998000           | download-only-998000 | jenkins | v1.33.1 | 05 Aug 24 10:25 PDT | 05 Aug 24 10:25 PDT |
	| start   | -o=json --download-only           | download-only-689000 | jenkins | v1.33.1 | 05 Aug 24 10:25 PDT |                     |
	|         | -p download-only-689000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0 |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=qemu2                    |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/05 10:25:53
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0805 10:25:53.881730    7061 out.go:291] Setting OutFile to fd 1 ...
	I0805 10:25:53.881850    7061 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:25:53.881854    7061 out.go:304] Setting ErrFile to fd 2...
	I0805 10:25:53.881856    7061 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:25:53.881996    7061 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
	I0805 10:25:53.883018    7061 out.go:298] Setting JSON to true
	I0805 10:25:53.898797    7061 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5123,"bootTime":1722873630,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 10:25:53.898861    7061 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 10:25:53.902721    7061 out.go:97] [download-only-689000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 10:25:53.902843    7061 notify.go:220] Checking for updates...
	I0805 10:25:53.906718    7061 out.go:169] MINIKUBE_LOCATION=19374
	I0805 10:25:53.911765    7061 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	I0805 10:25:53.915684    7061 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 10:25:53.918707    7061 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 10:25:53.921646    7061 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	W0805 10:25:53.927693    7061 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0805 10:25:53.927869    7061 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 10:25:53.930600    7061 out.go:97] Using the qemu2 driver based on user configuration
	I0805 10:25:53.930609    7061 start.go:297] selected driver: qemu2
	I0805 10:25:53.930613    7061 start.go:901] validating driver "qemu2" against <nil>
	I0805 10:25:53.930654    7061 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 10:25:53.933668    7061 out.go:169] Automatically selected the socket_vmnet network
	I0805 10:25:53.938877    7061 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0805 10:25:53.938960    7061 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0805 10:25:53.938993    7061 cni.go:84] Creating CNI manager for ""
	I0805 10:25:53.939004    7061 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 10:25:53.939013    7061 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0805 10:25:53.939063    7061 start.go:340] cluster config:
	{Name:download-only-689000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:download-only-689000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 10:25:53.942515    7061 iso.go:125] acquiring lock: {Name:mk3c732f608e41abb95f9d7b90e7a96dff21b06c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 10:25:53.945741    7061 out.go:97] Starting "download-only-689000" primary control-plane node in "download-only-689000" cluster
	I0805 10:25:53.945751    7061 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
	I0805 10:25:54.022586    7061 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-rc.0/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4
	I0805 10:25:54.022606    7061 cache.go:56] Caching tarball of preloaded images
	I0805 10:25:54.022902    7061 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
	I0805 10:25:54.027163    7061 out.go:97] Downloading Kubernetes v1.31.0-rc.0 preload ...
	I0805 10:25:54.027170    7061 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4 ...
	I0805 10:25:54.102253    7061 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-rc.0/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4?checksum=md5:c1f196b49f29ebea060b9249b6cb8e03 -> /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4
	I0805 10:25:58.321753    7061 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4 ...
	I0805 10:25:58.321921    7061 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-arm64.tar.lz4 ...
	I0805 10:25:58.843914    7061 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-rc.0 on docker
	I0805 10:25:58.844109    7061 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/download-only-689000/config.json ...
	I0805 10:25:58.844125    7061 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19374-6507/.minikube/profiles/download-only-689000/config.json: {Name:mkd3ce031639ff48b0e64fcd32751eeca4d5d096 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 10:25:58.844339    7061 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
	I0805 10:25:58.844453    7061 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0-rc.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19374-6507/.minikube/cache/darwin/arm64/v1.31.0-rc.0/kubectl
	
	
	* The control-plane node download-only-689000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-689000"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-rc.0/LogsDuration (0.08s)
=== RUN   TestDownloadOnly/v1.31.0-rc.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-rc.0/DeleteAll (0.11s)
=== RUN   TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-689000
--- PASS: TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds (0.10s)
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-383000 --alsologtostderr --binary-mirror http://127.0.0.1:50983 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-383000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-383000
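TestBinaryMirror passes without a VM because --binary-mirror only changes where the kubectl/kubelet/kubeadm binaries are fetched from, and --download-only stops before host creation. A sketch of serving such a mirror locally — the port and ./mirror directory are placeholders, and it assumes the mirror mimics the dl.k8s.io release path layout (v<version>/bin/darwin/arm64/kubectl):
	python3 -m http.server 50983 --directory ./mirror &
	out/minikube-darwin-arm64 start --download-only -p binary-mirror-demo --binary-mirror http://127.0.0.1:50983 --driver=qemu2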
--- PASS: TestBinaryMirror (0.29s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-690000
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-690000: exit status 85 (54.457125ms)
-- stdout --
	* Profile "addons-690000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-690000"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-690000
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-690000: exit status 85 (58.389042ms)
-- stdout --
	* Profile "addons-690000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-690000"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)
=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate
=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (10.41s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-159000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-159000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-159000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-159000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-159000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-159000 start --dry-run
--- PASS: TestErrorSpam/start (0.37s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-159000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-159000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-159000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-159000 status: exit status 7 (30.208ms)
-- stdout --
	nospam-159000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-159000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-159000 status" failed: exit status 7
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-159000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-159000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-159000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-159000 status: exit status 7 (29.255542ms)
-- stdout --
	nospam-159000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-159000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-159000 status" failed: exit status 7
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-159000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-159000 status
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-159000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-159000 status: exit status 7 (28.904875ms)
-- stdout --
	nospam-159000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-159000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-159000 status" failed: exit status 7
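The three identical non-zero exits are the point of this check: "minikube status" reports a stopped host with exit status 7 rather than an error, and repeated runs must keep the output free of unexpected warning or error spam. A one-line check, assuming the profile is stopped as above:
	out/minikube-darwin-arm64 -p nospam-159000 status; echo "exit: $?"    # "exit: 7" while host, kubelet and apiserver are Stopped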
--- PASS: TestErrorSpam/status (0.09s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-159000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-159000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-159000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-159000 pause: exit status 83 (39.738042ms)
-- stdout --
	* The control-plane node nospam-159000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-159000"
-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-159000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-159000 pause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-159000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-159000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-159000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-159000 pause: exit status 83 (39.870875ms)
-- stdout --
	* The control-plane node nospam-159000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-159000"
-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-159000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-159000 pause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-159000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-159000 pause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-159000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-159000 pause: exit status 83 (38.655292ms)
-- stdout --
	* The control-plane node nospam-159000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-159000"
-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-159000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-159000 pause" failed: exit status 83
--- PASS: TestErrorSpam/pause (0.12s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-159000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-159000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-159000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-159000 unpause: exit status 83 (38.859708ms)
-- stdout --
	* The control-plane node nospam-159000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-159000"
-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-159000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-159000 unpause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-159000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-159000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-159000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-159000 unpause: exit status 83 (38.858125ms)
-- stdout --
	* The control-plane node nospam-159000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-159000"
-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-159000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-159000 unpause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-159000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-159000 unpause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-159000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-159000 unpause: exit status 83 (37.8605ms)
-- stdout --
	* The control-plane node nospam-159000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-159000"
-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-159000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-159000 unpause" failed: exit status 83
--- PASS: TestErrorSpam/unpause (0.12s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-159000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-159000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-159000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-159000 stop: (3.268989333s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-159000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-159000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-159000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-159000 stop: (3.58961325s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-159000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-159000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-159000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-159000 stop: (2.138843625s)
--- PASS: TestErrorSpam/stop (9.00s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/19374-6507/.minikube/files/etc/test/nested/copy/7007/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (1.65s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-908000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialCacheCmdcacheadd_local2174536347/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 cache add minikube-local-cache-test:functional-908000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 cache delete minikube-local-cache-test:functional-908000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-908000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.03s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 config get cpus: exit status 14 (28.718833ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 config get cpus: exit status 14 (32.538958ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
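The exit status 14 runs above are deliberate: "minikube config get" signals a missing key with a non-zero exit rather than empty output, which is what the unset/get pairs assert. The same round trip, condensed into a sketch:
	out/minikube-darwin-arm64 -p functional-908000 config set cpus 2
	out/minikube-darwin-arm64 -p functional-908000 config get cpus                      # prints 2
	out/minikube-darwin-arm64 -p functional-908000 config unset cpus
	out/minikube-darwin-arm64 -p functional-908000 config get cpus; echo "exit: $?"     # "exit: 14" — key not found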
--- PASS: TestFunctional/parallel/ConfigCmd (0.23s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-908000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-908000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (111.645709ms)
-- stdout --
	* [functional-908000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19374
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0805 10:27:44.805668    7548 out.go:291] Setting OutFile to fd 1 ...
	I0805 10:27:44.805797    7548 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:27:44.805800    7548 out.go:304] Setting ErrFile to fd 2...
	I0805 10:27:44.805802    7548 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:27:44.805946    7548 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
	I0805 10:27:44.806937    7548 out.go:298] Setting JSON to false
	I0805 10:27:44.823100    7548 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5234,"bootTime":1722873630,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 10:27:44.823188    7548 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 10:27:44.827666    7548 out.go:177] * [functional-908000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0805 10:27:44.834459    7548 out.go:177]   - MINIKUBE_LOCATION=19374
	I0805 10:27:44.834513    7548 notify.go:220] Checking for updates...
	I0805 10:27:44.841615    7548 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	I0805 10:27:44.843019    7548 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 10:27:44.845548    7548 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 10:27:44.848628    7548 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	I0805 10:27:44.851573    7548 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 10:27:44.854807    7548 config.go:182] Loaded profile config "functional-908000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 10:27:44.855070    7548 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 10:27:44.859562    7548 out.go:177] * Using the qemu2 driver based on existing profile
	I0805 10:27:44.864585    7548 start.go:297] selected driver: qemu2
	I0805 10:27:44.864592    7548 start.go:901] validating driver "qemu2" against &{Name:functional-908000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-908000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 10:27:44.864639    7548 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 10:27:44.870597    7548 out.go:177] 
	W0805 10:27:44.874579    7548 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0805 10:27:44.878550    7548 out.go:177] 
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-908000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
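The first dry run fails fast with exit status 23: request validation rejects 250MiB against the 1800MB usable minimum before any qemu2 work starts, while the second dry run (no --memory override) passes against the existing profile. A sketch of the boundary, assuming any value at or above the reported minimum clears the check:
	out/minikube-darwin-arm64 start -p functional-908000 --dry-run --memory 250MB --driver=qemu2     # exit 23, RSRC_INSUFFICIENT_REQ_MEMORY
	out/minikube-darwin-arm64 start -p functional-908000 --dry-run --memory 1800mb --driver=qemu2    # clears the memory validation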
--- PASS: TestFunctional/parallel/DryRun (0.22s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-908000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-908000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (110.7475ms)
-- stdout --
	* [functional-908000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19374
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0805 10:27:44.688517    7544 out.go:291] Setting OutFile to fd 1 ...
	I0805 10:27:44.688643    7544 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:27:44.688648    7544 out.go:304] Setting ErrFile to fd 2...
	I0805 10:27:44.688651    7544 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 10:27:44.688798    7544 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19374-6507/.minikube/bin
	I0805 10:27:44.690252    7544 out.go:298] Setting JSON to false
	I0805 10:27:44.707074    7544 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5234,"bootTime":1722873630,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0805 10:27:44.707159    7544 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 10:27:44.711948    7544 out.go:177] * [functional-908000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	I0805 10:27:44.718630    7544 out.go:177]   - MINIKUBE_LOCATION=19374
	I0805 10:27:44.718705    7544 notify.go:220] Checking for updates...
	I0805 10:27:44.725558    7544 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	I0805 10:27:44.728609    7544 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0805 10:27:44.731583    7544 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 10:27:44.734516    7544 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	I0805 10:27:44.737615    7544 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 10:27:44.740955    7544 config.go:182] Loaded profile config "functional-908000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 10:27:44.741232    7544 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 10:27:44.745550    7544 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0805 10:27:44.752641    7544 start.go:297] selected driver: qemu2
	I0805 10:27:44.752648    7544 start.go:901] validating driver "qemu2" against &{Name:functional-908000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-908000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 10:27:44.752707    7544 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 10:27:44.759625    7544 out.go:177] 
	W0805 10:27:44.763619    7544 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0805 10:27:44.766580    7544 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)
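The InternationalLanguage test exercises minikube's localization: run under a French locale, the same start path emits French strings. "Utilisation du pilote qemu2 basé sur le profil existant" corresponds to "Using the qemu2 driver based on existing profile", and the final error reads, in English, "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB". A rough manual reproduction, assuming the locale environment and an undersized --memory request are what trigger the localized error (the test's exact invocation is not shown in this log):

	# ask for less memory than minikube's 1800MB usable minimum, under a French locale
	$ LC_ALL=fr LANG=fr out/minikube-darwin-arm64 start -p functional-908000 --dry-run --memory 250MB --alsologtostderr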

TestFunctional/parallel/AddonsCmd (0.09s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.09s)

TestFunctional/parallel/License (0.21s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.21s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-908000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.09s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.09s)

TestFunctional/parallel/ProfileCmd/profile_list (0.08s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1311: Took "45.585458ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1325: Took "33.065042ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.08s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.08s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1362: Took "45.928166ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1375: Took "32.800083ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.08s)
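Both runs above read the same profile data; the --light variant skips validating each cluster's status, which is why it consistently returns faster (about 33ms versus 46ms here). As an illustration only, the JSON form can be post-processed; the valid/invalid layout and the use of jq are assumptions, not part of the test:

	# print the names of loadable profiles from the JSON listing
	$ out/minikube-darwin-arm64 profile list -o json | jq -r '.valid[].Name'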

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/ImageCommands/Setup (1.8s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
functional_test.go:341: (dbg) Done: docker pull docker.io/kicbase/echo-server:1.0: (1.764364709s)
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-908000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.80s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 image rm docker.io/kicbase/echo-server:functional-908000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.07s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-908000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 image save --daemon docker.io/kicbase/echo-server:functional-908000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-908000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.08s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:351: (dbg) Done: dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.: (10.013817667s)
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)
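dscacheutil queries macOS's Directory Service resolver rather than a nameserver directly, so a hit here confirms the tunnel's DNS integration at the OS level. A manual spot-check along the same lines (the dig target 10.96.0.10, the conventional cluster DNS address inside the 10.96.0.0/12 service CIDR, is illustrative):

	# resolve via the macOS resolver, then compare with a direct query to cluster DNS
	$ dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
	$ dig +short nginx-svc.default.svc.cluster.local. @10.96.0.10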

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-908000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)

TestFunctional/delete_echo-server_images (0.07s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-908000
--- PASS: TestFunctional/delete_echo-server_images (0.07s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-908000
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-908000
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (3.67s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-146000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-146000 --output=json --user=testUser: (3.666344833s)
--- PASS: TestJSONOutput/stop/Command (3.67s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-019000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-019000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (95.79625ms)

-- stdout --
	{"specversion":"1.0","id":"4d788651-e568-4c65-8b00-740550f430d1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-019000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"7ddd00f7-ffc3-4684-9c7d-6ea089f466ad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19374"}}
	{"specversion":"1.0","id":"82951780-fcef-4213-964e-399b5b775001","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig"}}
	{"specversion":"1.0","id":"34d229ce-045c-45f7-bf30-b20c968cb2ae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"667cdecd-8cc9-4ad1-ae5c-f87b04d7f6b6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ae10be1d-6f73-4f9b-8f68-cc94749c3719","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube"}}
	{"specversion":"1.0","id":"e9d5383c-582e-4ff2-8893-ef324020f48c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a30178fe-c270-41bf-af6a-2c40a9e0c182","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-019000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-019000
--- PASS: TestErrorJSONOutput (0.20s)
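Each line of the JSON output above is a CloudEvents envelope; the minikube-specific payload lives under "data", and the final io.k8s.sigs.minikube.error event carries the exit code, name, and message. A small sketch of filtering those events, assuming jq is available (the test itself performs this validation in Go):

	# surface only error events from the JSON stream
	$ out/minikube-darwin-arm64 start -p json-output-error-019000 --memory=2200 --output=json --driver=fail \
	    | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'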

TestMainNoArgs (0.03s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestStoppedBinaryUpgrade/Setup (1.74s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.74s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.77s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-363000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.77s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-542000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-542000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (96.715541ms)

-- stdout --
	* [NoKubernetes-542000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19374
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19374-6507/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19374-6507/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
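The MK_USAGE failure is minikube rejecting contradictory flags: --no-kubernetes provisions a bare VM, so a pinned --kubernetes-version (whether passed explicitly or set in global config) is meaningless. A sketch of the fix the error suggests, reusing the test's profile name for illustration:

	# clear any globally configured version, then start without Kubernetes
	$ out/minikube-darwin-arm64 config unset kubernetes-version
	$ out/minikube-darwin-arm64 start -p NoKubernetes-542000 --no-kubernetes --driver=qemu2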

TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-542000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-542000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (43.101375ms)

-- stdout --
	* The control-plane node NoKubernetes-542000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-542000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)
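Exit status 83 accompanies the "host is not running" message, so the kubelet probe itself never executed; with a running guest, the exit status of the systemctl command is what distinguishes the cases. A manual version of the same check, assuming the profile's VM is up (the echo is illustrative):

	# exit 0 means the kubelet unit is active; non-zero means Kubernetes is not running
	$ out/minikube-darwin-arm64 ssh -p NoKubernetes-542000 "sudo systemctl is-active --quiet service kubelet"
	$ echo $?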

TestNoKubernetes/serial/ProfileList (0.1s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.10s)

TestNoKubernetes/serial/Stop (3.11s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-542000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-542000: (3.11482175s)
--- PASS: TestNoKubernetes/serial/Stop (3.11s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-542000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-542000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (44.560917ms)

-- stdout --
	* The control-plane node NoKubernetes-542000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-542000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

TestStartStop/group/old-k8s-version/serial/Stop (3.51s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-935000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-935000 --alsologtostderr -v=3: (3.505412125s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.51s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-935000 -n old-k8s-version-935000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-935000 -n old-k8s-version-935000: exit status 7 (58.498792ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-935000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)
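The status call extracts one field through a Go template, and minikube's status exit code encodes cluster state rather than command success, which is why the harness notes that exit status 7 "may be ok" for a deliberately stopped profile before re-enabling the addon. The equivalent manual check:

	# prints "Stopped" for this profile; the non-zero exit reflects cluster state, not a CLI failure
	$ out/minikube-darwin-arm64 status --format='{{.Host}}' -p old-k8s-version-935000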

TestStartStop/group/no-preload/serial/Stop (3.46s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-519000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-519000 --alsologtostderr -v=3: (3.454836792s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (3.46s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-519000 -n no-preload-519000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-519000 -n no-preload-519000: exit status 7 (59.058834ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-519000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/embed-certs/serial/Stop (1.97s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-088000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-088000 --alsologtostderr -v=3: (1.969490208s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (1.97s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-088000 -n embed-certs-088000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-088000 -n embed-certs-088000: exit status 7 (57.410959ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-088000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (2.02s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-325000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-325000 --alsologtostderr -v=3: (2.015295s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (2.02s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-325000 -n default-k8s-diff-port-325000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-325000 -n default-k8s-diff-port-325000: exit status 7 (58.043541ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-325000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-907000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (3.58s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-907000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-907000 --alsologtostderr -v=3: (3.58222425s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.58s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-907000 -n newest-cni-907000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-907000 -n newest-cni-907000: exit status 7 (53.7905ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-907000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (24/266)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.30.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

TestDownloadOnly/v1.30.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

TestDownloadOnly/v1.31.0-rc.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-rc.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0-rc.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-rc.0/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/MountCmd/any-port (11.87s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-908000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port245482318/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1722878827843061000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port245482318/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1722878827843061000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port245482318/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1722878827843061000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port245482318/001/test-1722878827843061000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (55.986333ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.453291ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (89.601083ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (83.004958ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.039375ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (84.558875ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.503459ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test_mount_test.go:123: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "sudo umount -f /mount-9p": exit status 83 (43.468ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test_mount_test.go:92: "out/minikube-darwin-arm64 -p functional-908000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-908000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port245482318/001:/mount-9p --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/any-port (11.87s)
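The mount is skipped rather than failed because the 9p server started by the unsigned test binary never becomes reachable: macOS prompts before letting a non-code-signed process listen on a non-localhost port, and nobody is present to approve it on CI. The underlying command shape, with a scratch directory standing in for the test's temp path:

	# serve a host directory into the guest at /mount-9p over 9p
	$ out/minikube-darwin-arm64 mount -p functional-908000 /tmp/mount-demo:/mount-9p --alsologtostderr -v=1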

TestFunctional/parallel/MountCmd/specific-port (12.38s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-908000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port4049989134/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (59.78725ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (84.440666ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (83.577166ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (84.844333ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.871041ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (82.6185ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (83.849375ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test_mount_test.go:251: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "sudo umount -f /mount-9p": exit status 83 (45.53175ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-908000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-908000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port4049989134/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- SKIP: TestFunctional/parallel/MountCmd/specific-port (12.38s)
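
Note: per the skip reason at functional_test_mount_test.go:251 above, the 9p mount never appears because macOS gates unsigned binaries that listen on non-localhost ports behind an interactive approval prompt. A possible local workaround (an assumption, not something this run verifies) is to ad-hoc sign the test binary before running:

    # Hypothetical workaround, not exercised in this report: ad-hoc sign the
    # binary so the macOS approval prompt does not block its 9p listener.
    codesign --force --sign - out/minikube-darwin-arm64
    # Re-probe the mount from inside the guest (same command the test runs):
    out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T /mount-9p | grep 9p"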

TestFunctional/parallel/MountCmd/VerifyCleanup (12.41s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-908000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3495149983/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-908000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3495149983/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-908000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3495149983/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T" /mount1: exit status 83 (80.628334ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T" /mount1: exit status 83 (82.111791ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T" /mount1: exit status 83 (84.601041ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T" /mount1: exit status 83 (88.728333ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T" /mount1: exit status 83 (84.580041ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T" /mount1: exit status 83 (87.84125ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T" /mount1: exit status 83 (84.204916ms)

-- stdout --
	* The control-plane node functional-908000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-908000"

-- /stdout --
functional_test_mount_test.go:340: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-908000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3495149983/001:/mount1 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-908000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3495149983/001:/mount2 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-908000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3495149983/001:/mount3 --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/VerifyCleanup (12.41s)
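
Note: every ssh probe in the two MountCmd tests above exits with status 83 and prints "state=Stopped", so the probes fail because the qemu2 guest never started, not because the mount logic itself was exercised. A minimal manual triage sketch, assuming exit status 83 consistently maps to a stopped host as the output above suggests:

    # Confirm the profile state, then bring the host up before re-probing.
    out/minikube-darwin-arm64 status -p functional-908000
    out/minikube-darwin-arm64 start -p functional-908000 --driver=qemu2
    out/minikube-darwin-arm64 -p functional-908000 ssh "findmnt -T /mount1"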

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.29s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-810000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-810000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-810000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-810000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-810000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-810000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-810000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-810000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-810000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-810000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-810000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-810000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810000"

>>> host: /etc/hosts:
* Profile "cilium-810000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810000"

>>> host: /etc/resolv.conf:
* Profile "cilium-810000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-810000

>>> host: crictl pods:
* Profile "cilium-810000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810000"

>>> host: crictl containers:
* Profile "cilium-810000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810000"

>>> k8s: describe netcat deployment:
error: context "cilium-810000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-810000" does not exist

>>> k8s: netcat logs:
error: context "cilium-810000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-810000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-810000" does not exist

>>> k8s: coredns logs:
error: context "cilium-810000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-810000" does not exist

>>> k8s: api server logs:
error: context "cilium-810000" does not exist

>>> host: /etc/cni:
* Profile "cilium-810000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810000"

>>> host: ip a s:
* Profile "cilium-810000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810000"

>>> host: ip r s:
* Profile "cilium-810000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810000"

>>> host: iptables-save:
* Profile "cilium-810000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810000"

>>> host: iptables table nat:
* Profile "cilium-810000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-810000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-810000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-810000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-810000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-810000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-810000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-810000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-810000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-810000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-810000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-810000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-810000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810000"

>>> host: kubelet daemon config:
* Profile "cilium-810000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810000"

>>> k8s: kubelet logs:
* Profile "cilium-810000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-810000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-810000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-810000

>>> host: docker daemon status:
* Profile "cilium-810000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810000"

>>> host: docker daemon config:
* Profile "cilium-810000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-810000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810000"

>>> host: docker system info:
* Profile "cilium-810000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810000"

>>> host: cri-docker daemon status:
* Profile "cilium-810000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810000"

>>> host: cri-docker daemon config:
* Profile "cilium-810000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-810000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-810000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810000"

>>> host: cri-dockerd version:
* Profile "cilium-810000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810000"

>>> host: containerd daemon status:
* Profile "cilium-810000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810000"

>>> host: containerd daemon config:
* Profile "cilium-810000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-810000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-810000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810000"

>>> host: containerd config dump:
* Profile "cilium-810000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810000"

>>> host: crio daemon status:
* Profile "cilium-810000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810000"

>>> host: crio daemon config:
* Profile "cilium-810000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810000"

>>> host: /etc/crio:
* Profile "cilium-810000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810000"

>>> host: crio config:
* Profile "cilium-810000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-810000"

----------------------- debugLogs end: cilium-810000 [took: 2.1831365s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-810000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-810000
--- SKIP: TestNetworkPlugins/group/cilium (2.29s)
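
Note: the debugLogs dump above fails uniformly with "context was not found" / "Profile ... not found" because the cilium-810000 cluster was never started before the skip fired; the kubectl config section shows an empty kubeconfig (clusters: null, contexts: null). A quick way to confirm that state by hand, assuming the default kubeconfig location:

    # Both commands are standard; neither should list cilium-810000 here.
    kubectl config get-contexts
    out/minikube-darwin-arm64 profile list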

TestStartStop/group/disable-driver-mounts (0.11s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-026000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-026000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.11s)
