Test Report: QEMU_macOS 18384

818397ea37b8941bfdd3d988b855153c5c099b26:2024-03-14:33567

Failed tests (156/266)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 41.08
7 TestDownloadOnly/v1.20.0/kubectl 0
31 TestOffline 10
36 TestAddons/Setup 10.38
37 TestCertOptions 10.17
38 TestCertExpiration 195.21
39 TestDockerFlags 10.11
40 TestForceSystemdFlag 10.05
41 TestForceSystemdEnv 10.4
47 TestErrorSpam/setup 9.98
56 TestFunctional/serial/StartWithProxy 9.96
58 TestFunctional/serial/SoftStart 5.26
59 TestFunctional/serial/KubeContext 0.06
60 TestFunctional/serial/KubectlGetPods 0.06
67 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.05
68 TestFunctional/serial/CacheCmd/cache/cache_reload 0.17
70 TestFunctional/serial/MinikubeKubectlCmd 0.56
71 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.72
72 TestFunctional/serial/ExtraConfig 5.26
73 TestFunctional/serial/ComponentHealth 0.06
74 TestFunctional/serial/LogsCmd 0.08
75 TestFunctional/serial/LogsFileCmd 0.08
76 TestFunctional/serial/InvalidService 0.03
79 TestFunctional/parallel/DashboardCmd 0.2
82 TestFunctional/parallel/StatusCmd 0.13
86 TestFunctional/parallel/ServiceCmdConnect 0.14
88 TestFunctional/parallel/PersistentVolumeClaim 0.03
90 TestFunctional/parallel/SSHCmd 0.12
91 TestFunctional/parallel/CpCmd 0.28
93 TestFunctional/parallel/FileSync 0.08
94 TestFunctional/parallel/CertSync 0.3
98 TestFunctional/parallel/NodeLabels 0.06
100 TestFunctional/parallel/NonActiveRuntimeDisabled 0.04
104 TestFunctional/parallel/Version/components 0.04
105 TestFunctional/parallel/ImageCommands/ImageListShort 0.04
106 TestFunctional/parallel/ImageCommands/ImageListTable 0.04
107 TestFunctional/parallel/ImageCommands/ImageListJson 0.04
108 TestFunctional/parallel/ImageCommands/ImageListYaml 0.04
109 TestFunctional/parallel/ImageCommands/ImageBuild 0.12
111 TestFunctional/parallel/DockerEnv/bash 0.05
112 TestFunctional/parallel/UpdateContextCmd/no_changes 0.05
113 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.04
114 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.05
115 TestFunctional/parallel/ServiceCmd/DeployApp 0.03
116 TestFunctional/parallel/ServiceCmd/List 0.05
117 TestFunctional/parallel/ServiceCmd/JSONOutput 0.04
118 TestFunctional/parallel/ServiceCmd/HTTPS 0.05
119 TestFunctional/parallel/ServiceCmd/Format 0.04
120 TestFunctional/parallel/ServiceCmd/URL 0.04
122 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.08
125 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 0
126 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 104.18
127 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.38
128 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.35
129 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.5
130 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.04
132 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.07
140 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 15.07
142 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 38.94
150 TestMutliControlPlane/serial/StartCluster 10.2
151 TestMutliControlPlane/serial/DeployApp 78.9
152 TestMutliControlPlane/serial/PingHostFromPods 0.09
153 TestMutliControlPlane/serial/AddWorkerNode 0.08
154 TestMutliControlPlane/serial/NodeLabels 0.06
155 TestMutliControlPlane/serial/HAppyAfterClusterStart 0.11
156 TestMutliControlPlane/serial/CopyFile 0.07
157 TestMutliControlPlane/serial/StopSecondaryNode 0.12
158 TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.11
159 TestMutliControlPlane/serial/RestartSecondaryNode 35.4
160 TestMutliControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.11
161 TestMutliControlPlane/serial/RestartClusterKeepsNodes 8.82
162 TestMutliControlPlane/serial/DeleteSecondaryNode 0.11
163 TestMutliControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.11
164 TestMutliControlPlane/serial/StopCluster 1.95
165 TestMutliControlPlane/serial/RestartCluster 5.26
166 TestMutliControlPlane/serial/DegradedAfterClusterRestart 0.11
167 TestMutliControlPlane/serial/AddSecondaryNode 0.08
168 TestMutliControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.11
171 TestImageBuild/serial/Setup 9.85
174 TestJSONOutput/start/Command 9.96
180 TestJSONOutput/pause/Command 0.08
186 TestJSONOutput/unpause/Command 0.05
203 TestMinikubeProfile 10.31
206 TestMountStart/serial/StartWithMountFirst 10.57
209 TestMultiNode/serial/FreshStart2Nodes 9.99
210 TestMultiNode/serial/DeployApp2Nodes 66.85
211 TestMultiNode/serial/PingHostFrom2Pods 0.09
212 TestMultiNode/serial/AddNode 0.08
213 TestMultiNode/serial/MultiNodeLabels 0.06
214 TestMultiNode/serial/ProfileList 0.11
215 TestMultiNode/serial/CopyFile 0.07
216 TestMultiNode/serial/StopNode 0.15
217 TestMultiNode/serial/StartAfterStop 54.23
218 TestMultiNode/serial/RestartKeepsNodes 9.03
219 TestMultiNode/serial/DeleteNode 0.11
220 TestMultiNode/serial/StopMultiNode 3.32
221 TestMultiNode/serial/RestartMultiNode 5.26
222 TestMultiNode/serial/ValidateNameConflict 20.09
226 TestPreload 10.12
228 TestScheduledStopUnix 9.99
229 TestSkaffold 16.74
232 TestRunningBinaryUpgrade 633.58
234 TestKubernetesUpgrade 17.59
247 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.72
248 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.52
250 TestStoppedBinaryUpgrade/Upgrade 579.83
252 TestPause/serial/Start 9.97
262 TestNoKubernetes/serial/StartWithK8s 9.83
263 TestNoKubernetes/serial/StartWithStopK8s 5.89
264 TestNoKubernetes/serial/Start 5.88
268 TestNoKubernetes/serial/StartNoArgs 5.95
270 TestNetworkPlugins/group/auto/Start 10.18
271 TestNetworkPlugins/group/kindnet/Start 9.79
272 TestNetworkPlugins/group/calico/Start 9.76
273 TestNetworkPlugins/group/custom-flannel/Start 9.74
274 TestNetworkPlugins/group/false/Start 9.82
275 TestNetworkPlugins/group/enable-default-cni/Start 9.83
276 TestNetworkPlugins/group/flannel/Start 9.89
278 TestNetworkPlugins/group/bridge/Start 9.74
279 TestNetworkPlugins/group/kubenet/Start 9.86
281 TestStartStop/group/old-k8s-version/serial/FirstStart 10.21
283 TestStartStop/group/no-preload/serial/FirstStart 11.91
284 TestStartStop/group/old-k8s-version/serial/DeployApp 0.1
285 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.4
288 TestStartStop/group/old-k8s-version/serial/SecondStart 5.3
289 TestStartStop/group/no-preload/serial/DeployApp 0.09
290 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.12
293 TestStartStop/group/no-preload/serial/SecondStart 5.26
294 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
295 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
296 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.08
297 TestStartStop/group/old-k8s-version/serial/Pause 0.11
299 TestStartStop/group/embed-certs/serial/FirstStart 9.85
300 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
301 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
302 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
303 TestStartStop/group/no-preload/serial/Pause 0.1
305 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 10.24
306 TestStartStop/group/embed-certs/serial/DeployApp 0.09
307 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.12
310 TestStartStop/group/embed-certs/serial/SecondStart 7.36
311 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
312 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.12
315 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.24
316 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
317 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
318 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
319 TestStartStop/group/embed-certs/serial/Pause 0.11
321 TestStartStop/group/newest-cni/serial/FirstStart 10.09
322 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
323 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
324 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
325 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.11
330 TestStartStop/group/newest-cni/serial/SecondStart 5.26
333 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.08
334 TestStartStop/group/newest-cni/serial/Pause 0.11
TestDownloadOnly/v1.20.0/json-events (41.08s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-659000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-659000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (41.081506792s)

-- stdout --
	{"specversion":"1.0","id":"a452cfdb-87d4-4885-a536-f000724be5f6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-659000] minikube v1.32.0 on Darwin 14.3.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"8b4bd485-5cd6-4f47-9766-d4296c204182","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18384"}}
	{"specversion":"1.0","id":"ec190517-c5e9-46d0-b830-dd27075befc7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig"}}
	{"specversion":"1.0","id":"2d8d3090-5828-495b-a1d7-36df7dcb4354","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"4554b653-e6ab-4841-ac9f-6c10fa3ede16","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b0427a8f-1d57-4a2f-8037-512a0dd03364","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube"}}
	{"specversion":"1.0","id":"a6f26154-b6d6-406d-8754-d57360463888","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"dd3c3cbc-ae4f-4b45-94c8-b9d2029d8e2c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"e723780b-e03a-42ac-a452-b49882650281","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"bd5f701a-28a1-47a8-9098-e230acfbb8e9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"013a5f35-07ad-40a9-8270-5dd1b17a6158","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-659000\" primary control-plane node in \"download-only-659000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"5aaff212-201f-4145-bcca-625fdd3b230c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"e5e09c6d-5a17-4569-a0f7-4c851ec84411","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/18384-10823/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x106ccf2c0 0x106ccf2c0 0x106ccf2c0 0x106ccf2c0 0x106ccf2c0 0x106ccf2c0 0x106ccf2c0] Decompressors:map[bz2:0x14000893300 gz:0x14000893308 tar:0x140008932b0 tar.bz2:0x140008932c0 tar.gz:0x140008932d0 tar.xz:0x140008932e0 tar.zst:0x140008932f0 tbz2:0x140008932c0 tgz:0x140008932d0 txz:0x140008932e0 tzst:0x140008932f0 xz:0x14000893310 zip:0x14000893320 zst:0x14000893318] Getters:map[file:0x140023748c0 http:0x1400088a230 https:0x1400088a280] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"ce65366e-8507-480b-8321-6c5e3068690f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0314 10:55:17.587595   11240 out.go:291] Setting OutFile to fd 1 ...
	I0314 10:55:17.587749   11240 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 10:55:17.587752   11240 out.go:304] Setting ErrFile to fd 2...
	I0314 10:55:17.587755   11240 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 10:55:17.587871   11240 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
	W0314 10:55:17.587954   11240 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/18384-10823/.minikube/config/config.json: open /Users/jenkins/minikube-integration/18384-10823/.minikube/config/config.json: no such file or directory
	I0314 10:55:17.589235   11240 out.go:298] Setting JSON to true
	I0314 10:55:17.607161   11240 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6889,"bootTime":1710432028,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0314 10:55:17.607222   11240 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0314 10:55:17.613179   11240 out.go:97] [download-only-659000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0314 10:55:17.617137   11240 out.go:169] MINIKUBE_LOCATION=18384
	I0314 10:55:17.613333   11240 notify.go:220] Checking for updates...
	W0314 10:55:17.613366   11240 preload.go:294] Failed to list preload files: open /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/preloaded-tarball: no such file or directory
	I0314 10:55:17.625112   11240 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	I0314 10:55:17.628177   11240 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0314 10:55:17.631176   11240 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 10:55:17.634175   11240 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	W0314 10:55:17.640161   11240 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0314 10:55:17.640388   11240 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 10:55:17.644075   11240 out.go:97] Using the qemu2 driver based on user configuration
	I0314 10:55:17.644096   11240 start.go:297] selected driver: qemu2
	I0314 10:55:17.644112   11240 start.go:901] validating driver "qemu2" against <nil>
	I0314 10:55:17.644172   11240 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0314 10:55:17.647189   11240 out.go:169] Automatically selected the socket_vmnet network
	I0314 10:55:17.652573   11240 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0314 10:55:17.652674   11240 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0314 10:55:17.652786   11240 cni.go:84] Creating CNI manager for ""
	I0314 10:55:17.652805   11240 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0314 10:55:17.652856   11240 start.go:340] cluster config:
	{Name:download-only-659000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-659000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 10:55:17.657729   11240 iso.go:125] acquiring lock: {Name:mkb282eb7d87bd36e7a71d78648e2dadb7f0a89e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 10:55:17.662119   11240 out.go:97] Downloading VM boot image ...
	I0314 10:55:17.662149   11240 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/iso/arm64/minikube-v1.32.1-1710348681-18375-arm64.iso
	I0314 10:55:35.848439   11240 out.go:97] Starting "download-only-659000" primary control-plane node in "download-only-659000" cluster
	I0314 10:55:35.848471   11240 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0314 10:55:36.137365   11240 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0314 10:55:36.137485   11240 cache.go:56] Caching tarball of preloaded images
	I0314 10:55:36.139093   11240 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0314 10:55:36.144081   11240 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0314 10:55:36.144108   11240 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0314 10:55:36.744233   11240 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0314 10:55:57.544823   11240 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0314 10:55:57.545009   11240 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0314 10:55:58.245113   11240 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0314 10:55:58.245312   11240 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/download-only-659000/config.json ...
	I0314 10:55:58.245331   11240 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/download-only-659000/config.json: {Name:mk97c40282b2ef2a1091f4503050bda7aec3a889 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 10:55:58.246583   11240 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0314 10:55:58.246754   11240 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0314 10:55:58.590165   11240 out.go:169] 
	W0314 10:55:58.594149   11240 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/18384-10823/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x106ccf2c0 0x106ccf2c0 0x106ccf2c0 0x106ccf2c0 0x106ccf2c0 0x106ccf2c0 0x106ccf2c0] Decompressors:map[bz2:0x14000893300 gz:0x14000893308 tar:0x140008932b0 tar.bz2:0x140008932c0 tar.gz:0x140008932d0 tar.xz:0x140008932e0 tar.zst:0x140008932f0 tbz2:0x140008932c0 tgz:0x140008932d0 txz:0x140008932e0 tzst:0x140008932f0 xz:0x14000893310 zip:0x14000893320 zst:0x14000893318] Getters:map[file:0x140023748c0 http:0x1400088a230 https:0x1400088a280] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0314 10:55:58.594178   11240 out_reason.go:110] 
	W0314 10:55:58.602175   11240 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0314 10:55:58.606099   11240 out.go:169] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-659000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (41.08s)
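
Root cause: the getter error above shows that the kubectl checksum URL for this version/arch combination (https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256) returns HTTP 404, so minikube cannot cache kubectl and exits with status 40. A minimal Go sketch to confirm the missing upstream artifact (URL copied verbatim from the log; assumes outbound network access from the agent):

	// checksum_probe.go: probes the checksum URL that fails TestDownloadOnly/v1.20.0/json-events.
	package main

	import (
		"fmt"
		"net/http"
	)

	func main() {
		url := "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"
		resp, err := http.Head(url)
		if err != nil {
			fmt.Println("request error:", err)
			return
		}
		defer resp.Body.Close()
		// Expected on this run: "404 Not Found", matching "bad response code: 404" in the getter error.
		fmt.Println(url, "->", resp.Status)
	}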

TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/18384-10823/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)
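
This subtest fails as a direct consequence of the download failure above: it only verifies that the kubectl binary landed in the cache, which amounts to a stat call on the cache path. A minimal equivalent check (a hypothetical standalone program; path copied from the log):

	// cache_check.go: mirrors the stat-style assertion behind TestDownloadOnly/v1.20.0/kubectl.
	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		path := "/Users/jenkins/minikube-integration/18384-10823/.minikube/cache/darwin/arm64/v1.20.0/kubectl"
		if _, err := os.Stat(path); err != nil {
			fmt.Println("FAIL:", err) // "no such file or directory", as the test reports
			return
		}
		fmt.Println("kubectl cached at", path)
	}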

TestOffline (10s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-012000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-012000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.8236025s)

-- stdout --
	* [offline-docker-012000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18384
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-012000" primary control-plane node in "offline-docker-012000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-012000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0314 11:07:12.790204   12826 out.go:291] Setting OutFile to fd 1 ...
	I0314 11:07:12.790336   12826 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:07:12.790339   12826 out.go:304] Setting ErrFile to fd 2...
	I0314 11:07:12.790342   12826 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:07:12.790468   12826 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
	I0314 11:07:12.791558   12826 out.go:298] Setting JSON to false
	I0314 11:07:12.809609   12826 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7604,"bootTime":1710432028,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0314 11:07:12.809688   12826 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0314 11:07:12.813895   12826 out.go:177] * [offline-docker-012000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0314 11:07:12.820715   12826 out.go:177]   - MINIKUBE_LOCATION=18384
	I0314 11:07:12.823723   12826 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	I0314 11:07:12.820730   12826 notify.go:220] Checking for updates...
	I0314 11:07:12.829637   12826 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0314 11:07:12.832678   12826 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 11:07:12.835580   12826 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	I0314 11:07:12.838668   12826 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 11:07:12.842042   12826 config.go:182] Loaded profile config "multinode-382000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 11:07:12.842104   12826 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 11:07:12.844593   12826 out.go:177] * Using the qemu2 driver based on user configuration
	I0314 11:07:12.851699   12826 start.go:297] selected driver: qemu2
	I0314 11:07:12.851709   12826 start.go:901] validating driver "qemu2" against <nil>
	I0314 11:07:12.851716   12826 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 11:07:12.853924   12826 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0314 11:07:12.855129   12826 out.go:177] * Automatically selected the socket_vmnet network
	I0314 11:07:12.857709   12826 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 11:07:12.857741   12826 cni.go:84] Creating CNI manager for ""
	I0314 11:07:12.857748   12826 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0314 11:07:12.857752   12826 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0314 11:07:12.857790   12826 start.go:340] cluster config:
	{Name:offline-docker-012000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:offline-docker-012000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 11:07:12.862448   12826 iso.go:125] acquiring lock: {Name:mkb282eb7d87bd36e7a71d78648e2dadb7f0a89e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 11:07:12.869616   12826 out.go:177] * Starting "offline-docker-012000" primary control-plane node in "offline-docker-012000" cluster
	I0314 11:07:12.873625   12826 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0314 11:07:12.873662   12826 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0314 11:07:12.873673   12826 cache.go:56] Caching tarball of preloaded images
	I0314 11:07:12.873750   12826 preload.go:173] Found /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0314 11:07:12.873755   12826 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0314 11:07:12.873820   12826 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/offline-docker-012000/config.json ...
	I0314 11:07:12.873831   12826 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/offline-docker-012000/config.json: {Name:mke1aa8451fb22da26b9ca16436892cceb563812 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 11:07:12.874109   12826 start.go:360] acquireMachinesLock for offline-docker-012000: {Name:mk7f342f84a1fcf2acaf2ebdc885e8ee2d848fd8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 11:07:12.874138   12826 start.go:364] duration metric: took 22.416µs to acquireMachinesLock for "offline-docker-012000"
	I0314 11:07:12.874148   12826 start.go:93] Provisioning new machine with config: &{Name:offline-docker-012000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:offline-docker-012000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0314 11:07:12.874182   12826 start.go:125] createHost starting for "" (driver="qemu2")
	I0314 11:07:12.878639   12826 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0314 11:07:12.893954   12826 start.go:159] libmachine.API.Create for "offline-docker-012000" (driver="qemu2")
	I0314 11:07:12.893983   12826 client.go:168] LocalClient.Create starting
	I0314 11:07:12.894066   12826 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/ca.pem
	I0314 11:07:12.894099   12826 main.go:141] libmachine: Decoding PEM data...
	I0314 11:07:12.894113   12826 main.go:141] libmachine: Parsing certificate...
	I0314 11:07:12.894161   12826 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/cert.pem
	I0314 11:07:12.894183   12826 main.go:141] libmachine: Decoding PEM data...
	I0314 11:07:12.894195   12826 main.go:141] libmachine: Parsing certificate...
	I0314 11:07:12.894570   12826 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18384-10823/.minikube/cache/iso/arm64/minikube-v1.32.1-1710348681-18375-arm64.iso...
	I0314 11:07:13.039952   12826 main.go:141] libmachine: Creating SSH key...
	I0314 11:07:13.087253   12826 main.go:141] libmachine: Creating Disk image...
	I0314 11:07:13.087260   12826 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0314 11:07:13.087455   12826 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/offline-docker-012000/disk.qcow2.raw /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/offline-docker-012000/disk.qcow2
	I0314 11:07:13.100215   12826 main.go:141] libmachine: STDOUT: 
	I0314 11:07:13.100245   12826 main.go:141] libmachine: STDERR: 
	I0314 11:07:13.100304   12826 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/offline-docker-012000/disk.qcow2 +20000M
	I0314 11:07:13.113168   12826 main.go:141] libmachine: STDOUT: Image resized.
	
	I0314 11:07:13.113204   12826 main.go:141] libmachine: STDERR: 
	I0314 11:07:13.113222   12826 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/offline-docker-012000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/offline-docker-012000/disk.qcow2
	I0314 11:07:13.113236   12826 main.go:141] libmachine: Starting QEMU VM...
	I0314 11:07:13.113269   12826 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/offline-docker-012000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/offline-docker-012000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/offline-docker-012000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:f0:ea:97:7e:77 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/offline-docker-012000/disk.qcow2
	I0314 11:07:13.115645   12826 main.go:141] libmachine: STDOUT: 
	I0314 11:07:13.115670   12826 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0314 11:07:13.115697   12826 client.go:171] duration metric: took 221.70975ms to LocalClient.Create
	I0314 11:07:15.117722   12826 start.go:128] duration metric: took 2.243575708s to createHost
	I0314 11:07:15.117745   12826 start.go:83] releasing machines lock for "offline-docker-012000", held for 2.243644625s
	W0314 11:07:15.117765   12826 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0314 11:07:15.126550   12826 out.go:177] * Deleting "offline-docker-012000" in qemu2 ...
	W0314 11:07:15.140789   12826 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0314 11:07:15.140798   12826 start.go:728] Will try again in 5 seconds ...
	I0314 11:07:20.143019   12826 start.go:360] acquireMachinesLock for offline-docker-012000: {Name:mk7f342f84a1fcf2acaf2ebdc885e8ee2d848fd8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 11:07:20.143657   12826 start.go:364] duration metric: took 429.666µs to acquireMachinesLock for "offline-docker-012000"
	I0314 11:07:20.143795   12826 start.go:93] Provisioning new machine with config: &{Name:offline-docker-012000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:offline-docker-012000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0314 11:07:20.144138   12826 start.go:125] createHost starting for "" (driver="qemu2")
	I0314 11:07:20.156603   12826 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0314 11:07:20.215068   12826 start.go:159] libmachine.API.Create for "offline-docker-012000" (driver="qemu2")
	I0314 11:07:20.215113   12826 client.go:168] LocalClient.Create starting
	I0314 11:07:20.215224   12826 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/ca.pem
	I0314 11:07:20.215285   12826 main.go:141] libmachine: Decoding PEM data...
	I0314 11:07:20.215303   12826 main.go:141] libmachine: Parsing certificate...
	I0314 11:07:20.215395   12826 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/cert.pem
	I0314 11:07:20.215436   12826 main.go:141] libmachine: Decoding PEM data...
	I0314 11:07:20.215448   12826 main.go:141] libmachine: Parsing certificate...
	I0314 11:07:20.215904   12826 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18384-10823/.minikube/cache/iso/arm64/minikube-v1.32.1-1710348681-18375-arm64.iso...
	I0314 11:07:20.365659   12826 main.go:141] libmachine: Creating SSH key...
	I0314 11:07:20.511595   12826 main.go:141] libmachine: Creating Disk image...
	I0314 11:07:20.511602   12826 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0314 11:07:20.511815   12826 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/offline-docker-012000/disk.qcow2.raw /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/offline-docker-012000/disk.qcow2
	I0314 11:07:20.524532   12826 main.go:141] libmachine: STDOUT: 
	I0314 11:07:20.524550   12826 main.go:141] libmachine: STDERR: 
	I0314 11:07:20.524597   12826 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/offline-docker-012000/disk.qcow2 +20000M
	I0314 11:07:20.534896   12826 main.go:141] libmachine: STDOUT: Image resized.
	
	I0314 11:07:20.534914   12826 main.go:141] libmachine: STDERR: 
	I0314 11:07:20.534933   12826 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/offline-docker-012000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/offline-docker-012000/disk.qcow2
	I0314 11:07:20.534940   12826 main.go:141] libmachine: Starting QEMU VM...
	I0314 11:07:20.534968   12826 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/offline-docker-012000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/offline-docker-012000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/offline-docker-012000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:95:ae:b4:1b:67 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/offline-docker-012000/disk.qcow2
	I0314 11:07:20.536691   12826 main.go:141] libmachine: STDOUT: 
	I0314 11:07:20.536707   12826 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0314 11:07:20.536727   12826 client.go:171] duration metric: took 321.614416ms to LocalClient.Create
	I0314 11:07:22.538843   12826 start.go:128] duration metric: took 2.394709208s to createHost
	I0314 11:07:22.538955   12826 start.go:83] releasing machines lock for "offline-docker-012000", held for 2.395280792s
	W0314 11:07:22.539290   12826 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-012000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-012000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0314 11:07:22.552476   12826 out.go:177] 
	W0314 11:07:22.556619   12826 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0314 11:07:22.556641   12826 out.go:239] * 
	* 
	W0314 11:07:22.558153   12826 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0314 11:07:22.565509   12826 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-012000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-03-14 11:07:22.580599 -0700 PDT m=+725.092512001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-012000 -n offline-docker-012000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-012000 -n offline-docker-012000: exit status 7 (71.723875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-012000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-012000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-012000
--- FAIL: TestOffline (10.00s)
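
TestOffline, like almost every other failure in this run, dies at VM creation: socket_vmnet_client cannot reach /var/run/socket_vmnet ("Connection refused"), which usually means the socket_vmnet daemon is not running or not listening on the build agent. A quick pre-flight probe for the socket before rerunning the suite (a sketch; the socket path is the SocketVMnetPath from the cluster config above):

	// vmnet_probe.go: dials the unix socket that the qemu2 driver needs.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// "connection refused" here reproduces the error seen throughout the minikube logs.
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}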

TestAddons/Setup (10.38s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-532000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p addons-532000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=qemu2  --addons=ingress --addons=ingress-dns: exit status 80 (10.378630167s)

-- stdout --
	* [addons-532000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18384
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "addons-532000" primary control-plane node in "addons-532000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "addons-532000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0314 10:56:48.705486   11405 out.go:291] Setting OutFile to fd 1 ...
	I0314 10:56:48.705613   11405 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 10:56:48.705616   11405 out.go:304] Setting ErrFile to fd 2...
	I0314 10:56:48.705618   11405 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 10:56:48.705746   11405 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
	I0314 10:56:48.706826   11405 out.go:298] Setting JSON to false
	I0314 10:56:48.722935   11405 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6980,"bootTime":1710432028,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0314 10:56:48.722994   11405 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0314 10:56:48.727895   11405 out.go:177] * [addons-532000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0314 10:56:48.734852   11405 out.go:177]   - MINIKUBE_LOCATION=18384
	I0314 10:56:48.738821   11405 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	I0314 10:56:48.734881   11405 notify.go:220] Checking for updates...
	I0314 10:56:48.744812   11405 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0314 10:56:48.747826   11405 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 10:56:48.750920   11405 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	I0314 10:56:48.753837   11405 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 10:56:48.756984   11405 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 10:56:48.760858   11405 out.go:177] * Using the qemu2 driver based on user configuration
	I0314 10:56:48.767853   11405 start.go:297] selected driver: qemu2
	I0314 10:56:48.767858   11405 start.go:901] validating driver "qemu2" against <nil>
	I0314 10:56:48.767863   11405 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 10:56:48.770094   11405 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0314 10:56:48.772837   11405 out.go:177] * Automatically selected the socket_vmnet network
	I0314 10:56:48.775960   11405 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 10:56:48.776017   11405 cni.go:84] Creating CNI manager for ""
	I0314 10:56:48.776025   11405 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0314 10:56:48.776029   11405 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0314 10:56:48.776076   11405 start.go:340] cluster config:
	{Name:addons-532000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-532000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 10:56:48.780715   11405 iso.go:125] acquiring lock: {Name:mkb282eb7d87bd36e7a71d78648e2dadb7f0a89e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 10:56:48.787855   11405 out.go:177] * Starting "addons-532000" primary control-plane node in "addons-532000" cluster
	I0314 10:56:48.791823   11405 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0314 10:56:48.791836   11405 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0314 10:56:48.791847   11405 cache.go:56] Caching tarball of preloaded images
	I0314 10:56:48.791904   11405 preload.go:173] Found /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0314 10:56:48.791910   11405 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0314 10:56:48.792165   11405 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/addons-532000/config.json ...
	I0314 10:56:48.792177   11405 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/addons-532000/config.json: {Name:mk9a06bf3cbef61b7508f6366c9724e6c12e4e19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 10:56:48.792399   11405 start.go:360] acquireMachinesLock for addons-532000: {Name:mk7f342f84a1fcf2acaf2ebdc885e8ee2d848fd8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 10:56:48.792504   11405 start.go:364] duration metric: took 99.125µs to acquireMachinesLock for "addons-532000"
	I0314 10:56:48.792518   11405 start.go:93] Provisioning new machine with config: &{Name:addons-532000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-532000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0314 10:56:48.792551   11405 start.go:125] createHost starting for "" (driver="qemu2")
	I0314 10:56:48.799889   11405 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0314 10:56:48.820823   11405 start.go:159] libmachine.API.Create for "addons-532000" (driver="qemu2")
	I0314 10:56:48.820863   11405 client.go:168] LocalClient.Create starting
	I0314 10:56:48.820998   11405 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/ca.pem
	I0314 10:56:49.121217   11405 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/cert.pem
	I0314 10:56:49.239348   11405 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18384-10823/.minikube/cache/iso/arm64/minikube-v1.32.1-1710348681-18375-arm64.iso...
	I0314 10:56:49.473678   11405 main.go:141] libmachine: Creating SSH key...
	I0314 10:56:49.559709   11405 main.go:141] libmachine: Creating Disk image...
	I0314 10:56:49.559717   11405 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0314 10:56:49.559932   11405 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/addons-532000/disk.qcow2.raw /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/addons-532000/disk.qcow2
	I0314 10:56:49.572559   11405 main.go:141] libmachine: STDOUT: 
	I0314 10:56:49.572590   11405 main.go:141] libmachine: STDERR: 
	I0314 10:56:49.572645   11405 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/addons-532000/disk.qcow2 +20000M
	I0314 10:56:49.583318   11405 main.go:141] libmachine: STDOUT: Image resized.
	
	I0314 10:56:49.583342   11405 main.go:141] libmachine: STDERR: 
	I0314 10:56:49.583360   11405 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/addons-532000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/addons-532000/disk.qcow2
	I0314 10:56:49.583364   11405 main.go:141] libmachine: Starting QEMU VM...
	I0314 10:56:49.583398   11405 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/addons-532000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/addons-532000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/addons-532000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:ac:ce:5d:5a:cc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/addons-532000/disk.qcow2
	I0314 10:56:49.585010   11405 main.go:141] libmachine: STDOUT: 
	I0314 10:56:49.585025   11405 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0314 10:56:49.585046   11405 client.go:171] duration metric: took 764.185833ms to LocalClient.Create
	I0314 10:56:51.587279   11405 start.go:128] duration metric: took 2.794730375s to createHost
	I0314 10:56:51.587379   11405 start.go:83] releasing machines lock for "addons-532000", held for 2.794914792s
	W0314 10:56:51.587432   11405 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0314 10:56:51.603734   11405 out.go:177] * Deleting "addons-532000" in qemu2 ...
	W0314 10:56:51.629599   11405 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0314 10:56:51.629628   11405 start.go:728] Will try again in 5 seconds ...
	I0314 10:56:56.631722   11405 start.go:360] acquireMachinesLock for addons-532000: {Name:mk7f342f84a1fcf2acaf2ebdc885e8ee2d848fd8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 10:56:56.632093   11405 start.go:364] duration metric: took 300.583µs to acquireMachinesLock for "addons-532000"
	I0314 10:56:56.632187   11405 start.go:93] Provisioning new machine with config: &{Name:addons-532000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-532000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0314 10:56:56.632430   11405 start.go:125] createHost starting for "" (driver="qemu2")
	I0314 10:56:56.643078   11405 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0314 10:56:56.691583   11405 start.go:159] libmachine.API.Create for "addons-532000" (driver="qemu2")
	I0314 10:56:56.691623   11405 client.go:168] LocalClient.Create starting
	I0314 10:56:56.691721   11405 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/ca.pem
	I0314 10:56:56.691776   11405 main.go:141] libmachine: Decoding PEM data...
	I0314 10:56:56.691794   11405 main.go:141] libmachine: Parsing certificate...
	I0314 10:56:56.691857   11405 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/cert.pem
	I0314 10:56:56.691898   11405 main.go:141] libmachine: Decoding PEM data...
	I0314 10:56:56.691911   11405 main.go:141] libmachine: Parsing certificate...
	I0314 10:56:56.692470   11405 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18384-10823/.minikube/cache/iso/arm64/minikube-v1.32.1-1710348681-18375-arm64.iso...
	I0314 10:56:56.846508   11405 main.go:141] libmachine: Creating SSH key...
	I0314 10:56:56.987760   11405 main.go:141] libmachine: Creating Disk image...
	I0314 10:56:56.987773   11405 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0314 10:56:56.987982   11405 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/addons-532000/disk.qcow2.raw /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/addons-532000/disk.qcow2
	I0314 10:56:57.000398   11405 main.go:141] libmachine: STDOUT: 
	I0314 10:56:57.000425   11405 main.go:141] libmachine: STDERR: 
	I0314 10:56:57.000483   11405 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/addons-532000/disk.qcow2 +20000M
	I0314 10:56:57.011077   11405 main.go:141] libmachine: STDOUT: Image resized.
	
	I0314 10:56:57.011114   11405 main.go:141] libmachine: STDERR: 
	I0314 10:56:57.011130   11405 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/addons-532000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/addons-532000/disk.qcow2
	I0314 10:56:57.011137   11405 main.go:141] libmachine: Starting QEMU VM...
	I0314 10:56:57.011169   11405 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/addons-532000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/addons-532000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/addons-532000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:0d:f7:30:7a:62 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/addons-532000/disk.qcow2
	I0314 10:56:57.012974   11405 main.go:141] libmachine: STDOUT: 
	I0314 10:56:57.012991   11405 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0314 10:56:57.013006   11405 client.go:171] duration metric: took 321.383708ms to LocalClient.Create
	I0314 10:56:59.013284   11405 start.go:128] duration metric: took 2.380841084s to createHost
	I0314 10:56:59.013340   11405 start.go:83] releasing machines lock for "addons-532000", held for 2.381274125s
	W0314 10:56:59.013672   11405 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p addons-532000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p addons-532000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0314 10:56:59.025278   11405 out.go:177] 
	W0314 10:56:59.028175   11405 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0314 10:56:59.028221   11405 out.go:239] * 
	* 
	W0314 10:56:59.030821   11405 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0314 10:56:59.039164   11405 out.go:177] 

** /stderr **
addons_test.go:111: out/minikube-darwin-arm64 start -p addons-532000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=qemu2  --addons=ingress --addons=ingress-dns failed: exit status 80
--- FAIL: TestAddons/Setup (10.38s)
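
The stderr trace above shows the full libmachine sequence: convert the raw disk to qcow2, grow it by 20000M, then launch qemu-system-aarch64 through socket_vmnet_client, which is the step that fails. A condensed replay of the same steps, with paths shortened (using echo as the child command is an assumption, purely to exercise the socket connection):

	qemu-img convert -f raw -O qcow2 disk.qcow2.raw disk.qcow2   # succeeds in the log (STDERR empty)
	qemu-img resize disk.qcow2 +20000M                           # succeeds in the log ("Image resized.")
	# socket_vmnet_client connects to /var/run/socket_vmnet, then execs the rest
	# of its argument list with the connection passed as fd 3 (hence the
	# "-netdev socket,id=net0,fd=3" in the qemu command line above); here it
	# fails with "Connection refused" before the child command ever runs
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet echo ok

Both qemu-img steps succeed in the log, so the failure is isolated to the socket connection, not disk provisioning.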

TestCertOptions (10.17s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-764000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-764000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.81711s)

-- stdout --
	* [cert-options-764000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18384
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-764000" primary control-plane node in "cert-options-764000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-764000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-764000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-764000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-764000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-764000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (81.340875ms)

-- stdout --
	* The control-plane node cert-options-764000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-764000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-764000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-764000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-764000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-764000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (39.937459ms)

-- stdout --
	* The control-plane node cert-options-764000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-764000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-764000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right api port. 
-- stdout --
	* The control-plane node cert-options-764000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-764000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-03-14 11:07:53.296542 -0700 PDT m=+755.809033168
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-764000 -n cert-options-764000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-764000 -n cert-options-764000: exit status 7 (32.047125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-764000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-764000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-764000
--- FAIL: TestCertOptions (10.17s)
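
The SAN assertions at cert_options_test.go:69 report every name as missing only because there was never a certificate to read: the ssh step itself exited 83 (host stopped). On a cluster that actually starts, the same check can be reproduced by hand; a sketch, assuming the profile is running (the grep pattern matches openssl's standard text-output header):

	out/minikube-darwin-arm64 -p cert-options-764000 ssh \
		"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
		| grep -A1 "Subject Alternative Name"

The output should list 127.0.0.1 and 192.168.15.15 among the IP entries, and localhost and www.google.com among the DNS entries.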

TestCertExpiration (195.21s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-802000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-802000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (9.753341625s)

-- stdout --
	* [cert-expiration-802000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18384
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-802000" primary control-plane node in "cert-expiration-802000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-802000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-802000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-802000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-802000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-802000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.2336435s)

-- stdout --
	* [cert-expiration-802000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18384
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-802000" primary control-plane node in "cert-expiration-802000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-802000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-802000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-802000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-802000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-802000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18384
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-802000" primary control-plane node in "cert-expiration-802000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-802000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-802000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-802000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-03-14 11:10:53.214829 -0700 PDT m=+935.730704335
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-802000 -n cert-expiration-802000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-802000 -n cert-expiration-802000: exit status 7 (56.944584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-802000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-802000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-802000
--- FAIL: TestCertExpiration (195.21s)
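
This test starts a cluster whose certificates expire after three minutes (--cert-expiration=3m), waits out the expiry (hence the 195s runtime), then restarts with --cert-expiration=8760h and expects a warning about expired certs; neither start got past VM creation here, so the check at cert_options_test.go:136 compared against an error message instead of a cert warning. On a working driver, the expiry can be read directly; a sketch, assuming the profile is running (-enddate is a standard openssl x509 option):

	out/minikube-darwin-arm64 -p cert-expiration-802000 ssh \
		"openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"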

TestDockerFlags (10.11s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-378000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-378000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.848822667s)

-- stdout --
	* [docker-flags-378000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18384
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-378000" primary control-plane node in "docker-flags-378000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-378000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0314 11:07:33.185764   13026 out.go:291] Setting OutFile to fd 1 ...
	I0314 11:07:33.185882   13026 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:07:33.185885   13026 out.go:304] Setting ErrFile to fd 2...
	I0314 11:07:33.185888   13026 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:07:33.186008   13026 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
	I0314 11:07:33.187051   13026 out.go:298] Setting JSON to false
	I0314 11:07:33.203143   13026 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7625,"bootTime":1710432028,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0314 11:07:33.203196   13026 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0314 11:07:33.208051   13026 out.go:177] * [docker-flags-378000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0314 11:07:33.224025   13026 out.go:177]   - MINIKUBE_LOCATION=18384
	I0314 11:07:33.215108   13026 notify.go:220] Checking for updates...
	I0314 11:07:33.230035   13026 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	I0314 11:07:33.233033   13026 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0314 11:07:33.236052   13026 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 11:07:33.238940   13026 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	I0314 11:07:33.242005   13026 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 11:07:33.245389   13026 config.go:182] Loaded profile config "force-systemd-flag-911000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 11:07:33.245461   13026 config.go:182] Loaded profile config "multinode-382000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 11:07:33.245512   13026 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 11:07:33.248980   13026 out.go:177] * Using the qemu2 driver based on user configuration
	I0314 11:07:33.256015   13026 start.go:297] selected driver: qemu2
	I0314 11:07:33.256020   13026 start.go:901] validating driver "qemu2" against <nil>
	I0314 11:07:33.256024   13026 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 11:07:33.258230   13026 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0314 11:07:33.259487   13026 out.go:177] * Automatically selected the socket_vmnet network
	I0314 11:07:33.262052   13026 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0314 11:07:33.262072   13026 cni.go:84] Creating CNI manager for ""
	I0314 11:07:33.262077   13026 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0314 11:07:33.262086   13026 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0314 11:07:33.262117   13026 start.go:340] cluster config:
	{Name:docker-flags-378000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:docker-flags-378000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 11:07:33.266649   13026 iso.go:125] acquiring lock: {Name:mkb282eb7d87bd36e7a71d78648e2dadb7f0a89e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 11:07:33.275012   13026 out.go:177] * Starting "docker-flags-378000" primary control-plane node in "docker-flags-378000" cluster
	I0314 11:07:33.279029   13026 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0314 11:07:33.279043   13026 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0314 11:07:33.279058   13026 cache.go:56] Caching tarball of preloaded images
	I0314 11:07:33.279135   13026 preload.go:173] Found /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0314 11:07:33.279140   13026 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0314 11:07:33.279209   13026 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/docker-flags-378000/config.json ...
	I0314 11:07:33.279225   13026 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/docker-flags-378000/config.json: {Name:mkb937954e14d1321d02fc630a368e0c7f138ce4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 11:07:33.279435   13026 start.go:360] acquireMachinesLock for docker-flags-378000: {Name:mk7f342f84a1fcf2acaf2ebdc885e8ee2d848fd8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 11:07:33.279470   13026 start.go:364] duration metric: took 27.125µs to acquireMachinesLock for "docker-flags-378000"
	I0314 11:07:33.279482   13026 start.go:93] Provisioning new machine with config: &{Name:docker-flags-378000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:docker-flags-378000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0314 11:07:33.279508   13026 start.go:125] createHost starting for "" (driver="qemu2")
	I0314 11:07:33.287013   13026 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0314 11:07:33.305661   13026 start.go:159] libmachine.API.Create for "docker-flags-378000" (driver="qemu2")
	I0314 11:07:33.305688   13026 client.go:168] LocalClient.Create starting
	I0314 11:07:33.305748   13026 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/ca.pem
	I0314 11:07:33.305779   13026 main.go:141] libmachine: Decoding PEM data...
	I0314 11:07:33.305788   13026 main.go:141] libmachine: Parsing certificate...
	I0314 11:07:33.305840   13026 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/cert.pem
	I0314 11:07:33.305867   13026 main.go:141] libmachine: Decoding PEM data...
	I0314 11:07:33.305873   13026 main.go:141] libmachine: Parsing certificate...
	I0314 11:07:33.306321   13026 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18384-10823/.minikube/cache/iso/arm64/minikube-v1.32.1-1710348681-18375-arm64.iso...
	I0314 11:07:33.452441   13026 main.go:141] libmachine: Creating SSH key...
	I0314 11:07:33.486217   13026 main.go:141] libmachine: Creating Disk image...
	I0314 11:07:33.486222   13026 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0314 11:07:33.486413   13026 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/docker-flags-378000/disk.qcow2.raw /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/docker-flags-378000/disk.qcow2
	I0314 11:07:33.498693   13026 main.go:141] libmachine: STDOUT: 
	I0314 11:07:33.498717   13026 main.go:141] libmachine: STDERR: 
	I0314 11:07:33.498765   13026 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/docker-flags-378000/disk.qcow2 +20000M
	I0314 11:07:33.509258   13026 main.go:141] libmachine: STDOUT: Image resized.
	
	I0314 11:07:33.509275   13026 main.go:141] libmachine: STDERR: 
	I0314 11:07:33.509290   13026 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/docker-flags-378000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/docker-flags-378000/disk.qcow2
	I0314 11:07:33.509295   13026 main.go:141] libmachine: Starting QEMU VM...
	I0314 11:07:33.509330   13026 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/docker-flags-378000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/docker-flags-378000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/docker-flags-378000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:6a:28:75:88:bd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/docker-flags-378000/disk.qcow2
	I0314 11:07:33.510942   13026 main.go:141] libmachine: STDOUT: 
	I0314 11:07:33.510957   13026 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0314 11:07:33.510981   13026 client.go:171] duration metric: took 205.286459ms to LocalClient.Create
	I0314 11:07:35.513193   13026 start.go:128] duration metric: took 2.233695583s to createHost
	I0314 11:07:35.513308   13026 start.go:83] releasing machines lock for "docker-flags-378000", held for 2.233848292s
	W0314 11:07:35.513369   13026 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0314 11:07:35.523432   13026 out.go:177] * Deleting "docker-flags-378000" in qemu2 ...
	W0314 11:07:35.553620   13026 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0314 11:07:35.553651   13026 start.go:728] Will try again in 5 seconds ...
	I0314 11:07:40.555829   13026 start.go:360] acquireMachinesLock for docker-flags-378000: {Name:mk7f342f84a1fcf2acaf2ebdc885e8ee2d848fd8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 11:07:40.576509   13026 start.go:364] duration metric: took 20.533083ms to acquireMachinesLock for "docker-flags-378000"
	I0314 11:07:40.576600   13026 start.go:93] Provisioning new machine with config: &{Name:docker-flags-378000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:docker-flags-378000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0314 11:07:40.576824   13026 start.go:125] createHost starting for "" (driver="qemu2")
	I0314 11:07:40.586380   13026 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0314 11:07:40.632039   13026 start.go:159] libmachine.API.Create for "docker-flags-378000" (driver="qemu2")
	I0314 11:07:40.632097   13026 client.go:168] LocalClient.Create starting
	I0314 11:07:40.632262   13026 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/ca.pem
	I0314 11:07:40.632337   13026 main.go:141] libmachine: Decoding PEM data...
	I0314 11:07:40.632357   13026 main.go:141] libmachine: Parsing certificate...
	I0314 11:07:40.632447   13026 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/cert.pem
	I0314 11:07:40.632494   13026 main.go:141] libmachine: Decoding PEM data...
	I0314 11:07:40.632508   13026 main.go:141] libmachine: Parsing certificate...
	I0314 11:07:40.633208   13026 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18384-10823/.minikube/cache/iso/arm64/minikube-v1.32.1-1710348681-18375-arm64.iso...
	I0314 11:07:40.787837   13026 main.go:141] libmachine: Creating SSH key...
	I0314 11:07:40.930710   13026 main.go:141] libmachine: Creating Disk image...
	I0314 11:07:40.930716   13026 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0314 11:07:40.930931   13026 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/docker-flags-378000/disk.qcow2.raw /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/docker-flags-378000/disk.qcow2
	I0314 11:07:40.943116   13026 main.go:141] libmachine: STDOUT: 
	I0314 11:07:40.943136   13026 main.go:141] libmachine: STDERR: 
	I0314 11:07:40.943187   13026 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/docker-flags-378000/disk.qcow2 +20000M
	I0314 11:07:40.953729   13026 main.go:141] libmachine: STDOUT: Image resized.
	
	I0314 11:07:40.953746   13026 main.go:141] libmachine: STDERR: 
	I0314 11:07:40.953763   13026 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/docker-flags-378000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/docker-flags-378000/disk.qcow2
	I0314 11:07:40.953768   13026 main.go:141] libmachine: Starting QEMU VM...
	I0314 11:07:40.953809   13026 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/docker-flags-378000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/docker-flags-378000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/docker-flags-378000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:40:78:fe:bb:10 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/docker-flags-378000/disk.qcow2
	I0314 11:07:40.955502   13026 main.go:141] libmachine: STDOUT: 
	I0314 11:07:40.955528   13026 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0314 11:07:40.955541   13026 client.go:171] duration metric: took 323.445334ms to LocalClient.Create
	I0314 11:07:42.957689   13026 start.go:128] duration metric: took 2.380884708s to createHost
	I0314 11:07:42.957737   13026 start.go:83] releasing machines lock for "docker-flags-378000", held for 2.381237417s
	W0314 11:07:42.958024   13026 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-378000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-378000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0314 11:07:42.966977   13026 out.go:177] 
	W0314 11:07:42.974656   13026 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0314 11:07:42.974678   13026 out.go:239] * 
	* 
	W0314 11:07:42.975962   13026 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0314 11:07:42.989500   13026 out.go:177] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-378000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-378000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-378000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (79.189583ms)

-- stdout --
	* The control-plane node docker-flags-378000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-378000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-378000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-378000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-378000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-378000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-378000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-378000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-378000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (43.774875ms)

-- stdout --
	* The control-plane node docker-flags-378000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-378000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-378000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-378000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control-plane node docker-flags-378000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-378000\"\n"
panic.go:626: *** TestDockerFlags FAILED at 2024-03-14 11:07:43.129647 -0700 PDT m=+745.641946585
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-378000 -n docker-flags-378000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-378000 -n docker-flags-378000: exit status 7 (30.887875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-378000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-378000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-378000
--- FAIL: TestDockerFlags (10.11s)
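
The root cause is visible in the stderr above: on both createHost attempts, socket_vmnet_client gets `Failed to connect to "/var/run/socket_vmnet": Connection refused`, so qemu-system-aarch64 is never launched, the profile is left at state=Stopped, and every later assertion fails as a consequence. A minimal triage sketch for the CI host, assuming socket_vmnet was installed under the /opt/socket_vmnet prefix shown in the log (the gateway address below is an assumption from a typical socket_vmnet setup, not taken from this report):

	# check whether the socket_vmnet daemon is running and its socket exists
	pgrep -fl socket_vmnet
	ls -l /var/run/socket_vmnet
	# if absent, start it as root (vmnet requires root); gateway address assumed
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet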

TestForceSystemdFlag (10.05s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-911000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-911000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.824759666s)

-- stdout --
	* [force-systemd-flag-911000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18384
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-911000" primary control-plane node in "force-systemd-flag-911000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-911000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0314 11:07:28.211698   13002 out.go:291] Setting OutFile to fd 1 ...
	I0314 11:07:28.211825   13002 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:07:28.211829   13002 out.go:304] Setting ErrFile to fd 2...
	I0314 11:07:28.211831   13002 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:07:28.211948   13002 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
	I0314 11:07:28.212935   13002 out.go:298] Setting JSON to false
	I0314 11:07:28.229355   13002 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7620,"bootTime":1710432028,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0314 11:07:28.229438   13002 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0314 11:07:28.235582   13002 out.go:177] * [force-systemd-flag-911000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0314 11:07:28.241446   13002 out.go:177]   - MINIKUBE_LOCATION=18384
	I0314 11:07:28.245492   13002 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	I0314 11:07:28.241494   13002 notify.go:220] Checking for updates...
	I0314 11:07:28.246861   13002 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0314 11:07:28.249427   13002 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 11:07:28.252486   13002 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	I0314 11:07:28.255469   13002 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 11:07:28.258866   13002 config.go:182] Loaded profile config "force-systemd-env-600000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 11:07:28.258934   13002 config.go:182] Loaded profile config "multinode-382000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 11:07:28.258980   13002 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 11:07:28.263420   13002 out.go:177] * Using the qemu2 driver based on user configuration
	I0314 11:07:28.270428   13002 start.go:297] selected driver: qemu2
	I0314 11:07:28.270433   13002 start.go:901] validating driver "qemu2" against <nil>
	I0314 11:07:28.270438   13002 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 11:07:28.272674   13002 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0314 11:07:28.275426   13002 out.go:177] * Automatically selected the socket_vmnet network
	I0314 11:07:28.278531   13002 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0314 11:07:28.278546   13002 cni.go:84] Creating CNI manager for ""
	I0314 11:07:28.278553   13002 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0314 11:07:28.278563   13002 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0314 11:07:28.278589   13002 start.go:340] cluster config:
	{Name:force-systemd-flag-911000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-flag-911000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 11:07:28.283014   13002 iso.go:125] acquiring lock: {Name:mkb282eb7d87bd36e7a71d78648e2dadb7f0a89e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 11:07:28.294514   13002 out.go:177] * Starting "force-systemd-flag-911000" primary control-plane node in "force-systemd-flag-911000" cluster
	I0314 11:07:28.298405   13002 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0314 11:07:28.298423   13002 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0314 11:07:28.298435   13002 cache.go:56] Caching tarball of preloaded images
	I0314 11:07:28.298485   13002 preload.go:173] Found /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0314 11:07:28.298491   13002 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0314 11:07:28.298546   13002 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/force-systemd-flag-911000/config.json ...
	I0314 11:07:28.298557   13002 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/force-systemd-flag-911000/config.json: {Name:mk413ad54669bf41c46a7295d6f763022fe234bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 11:07:28.298781   13002 start.go:360] acquireMachinesLock for force-systemd-flag-911000: {Name:mk7f342f84a1fcf2acaf2ebdc885e8ee2d848fd8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 11:07:28.298815   13002 start.go:364] duration metric: took 25.5µs to acquireMachinesLock for "force-systemd-flag-911000"
	I0314 11:07:28.298829   13002 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-911000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-flag-911000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0314 11:07:28.298864   13002 start.go:125] createHost starting for "" (driver="qemu2")
	I0314 11:07:28.306448   13002 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0314 11:07:28.323852   13002 start.go:159] libmachine.API.Create for "force-systemd-flag-911000" (driver="qemu2")
	I0314 11:07:28.323883   13002 client.go:168] LocalClient.Create starting
	I0314 11:07:28.323962   13002 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/ca.pem
	I0314 11:07:28.323995   13002 main.go:141] libmachine: Decoding PEM data...
	I0314 11:07:28.324005   13002 main.go:141] libmachine: Parsing certificate...
	I0314 11:07:28.324044   13002 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/cert.pem
	I0314 11:07:28.324067   13002 main.go:141] libmachine: Decoding PEM data...
	I0314 11:07:28.324074   13002 main.go:141] libmachine: Parsing certificate...
	I0314 11:07:28.324428   13002 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18384-10823/.minikube/cache/iso/arm64/minikube-v1.32.1-1710348681-18375-arm64.iso...
	I0314 11:07:28.468551   13002 main.go:141] libmachine: Creating SSH key...
	I0314 11:07:28.524163   13002 main.go:141] libmachine: Creating Disk image...
	I0314 11:07:28.524173   13002 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0314 11:07:28.524367   13002 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/force-systemd-flag-911000/disk.qcow2.raw /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/force-systemd-flag-911000/disk.qcow2
	I0314 11:07:28.536820   13002 main.go:141] libmachine: STDOUT: 
	I0314 11:07:28.536838   13002 main.go:141] libmachine: STDERR: 
	I0314 11:07:28.536894   13002 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/force-systemd-flag-911000/disk.qcow2 +20000M
	I0314 11:07:28.547290   13002 main.go:141] libmachine: STDOUT: Image resized.
	
	I0314 11:07:28.547306   13002 main.go:141] libmachine: STDERR: 
	I0314 11:07:28.547316   13002 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/force-systemd-flag-911000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/force-systemd-flag-911000/disk.qcow2
	I0314 11:07:28.547322   13002 main.go:141] libmachine: Starting QEMU VM...
	I0314 11:07:28.547357   13002 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/force-systemd-flag-911000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/force-systemd-flag-911000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/force-systemd-flag-911000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:a0:28:39:68:c0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/force-systemd-flag-911000/disk.qcow2
	I0314 11:07:28.549003   13002 main.go:141] libmachine: STDOUT: 
	I0314 11:07:28.549018   13002 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0314 11:07:28.549036   13002 client.go:171] duration metric: took 225.150959ms to LocalClient.Create
	I0314 11:07:30.549405   13002 start.go:128] duration metric: took 2.250546708s to createHost
	I0314 11:07:30.549501   13002 start.go:83] releasing machines lock for "force-systemd-flag-911000", held for 2.250718291s
	W0314 11:07:30.549603   13002 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0314 11:07:30.570607   13002 out.go:177] * Deleting "force-systemd-flag-911000" in qemu2 ...
	W0314 11:07:30.591995   13002 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0314 11:07:30.592024   13002 start.go:728] Will try again in 5 seconds ...
	I0314 11:07:35.594166   13002 start.go:360] acquireMachinesLock for force-systemd-flag-911000: {Name:mk7f342f84a1fcf2acaf2ebdc885e8ee2d848fd8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 11:07:35.594478   13002 start.go:364] duration metric: took 236.583µs to acquireMachinesLock for "force-systemd-flag-911000"
	I0314 11:07:35.594563   13002 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-911000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-flag-911000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0314 11:07:35.594760   13002 start.go:125] createHost starting for "" (driver="qemu2")
	I0314 11:07:35.603480   13002 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0314 11:07:35.648132   13002 start.go:159] libmachine.API.Create for "force-systemd-flag-911000" (driver="qemu2")
	I0314 11:07:35.648186   13002 client.go:168] LocalClient.Create starting
	I0314 11:07:35.648323   13002 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/ca.pem
	I0314 11:07:35.648391   13002 main.go:141] libmachine: Decoding PEM data...
	I0314 11:07:35.648410   13002 main.go:141] libmachine: Parsing certificate...
	I0314 11:07:35.648508   13002 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/cert.pem
	I0314 11:07:35.648577   13002 main.go:141] libmachine: Decoding PEM data...
	I0314 11:07:35.648605   13002 main.go:141] libmachine: Parsing certificate...
	I0314 11:07:35.649545   13002 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18384-10823/.minikube/cache/iso/arm64/minikube-v1.32.1-1710348681-18375-arm64.iso...
	I0314 11:07:35.808844   13002 main.go:141] libmachine: Creating SSH key...
	I0314 11:07:35.936008   13002 main.go:141] libmachine: Creating Disk image...
	I0314 11:07:35.936014   13002 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0314 11:07:35.936223   13002 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/force-systemd-flag-911000/disk.qcow2.raw /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/force-systemd-flag-911000/disk.qcow2
	I0314 11:07:35.948569   13002 main.go:141] libmachine: STDOUT: 
	I0314 11:07:35.948591   13002 main.go:141] libmachine: STDERR: 
	I0314 11:07:35.948659   13002 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/force-systemd-flag-911000/disk.qcow2 +20000M
	I0314 11:07:35.959224   13002 main.go:141] libmachine: STDOUT: Image resized.
	
	I0314 11:07:35.959240   13002 main.go:141] libmachine: STDERR: 
	I0314 11:07:35.959265   13002 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/force-systemd-flag-911000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/force-systemd-flag-911000/disk.qcow2
	I0314 11:07:35.959271   13002 main.go:141] libmachine: Starting QEMU VM...
	I0314 11:07:35.959314   13002 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/force-systemd-flag-911000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/force-systemd-flag-911000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/force-systemd-flag-911000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:9a:b8:2e:4b:5e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/force-systemd-flag-911000/disk.qcow2
	I0314 11:07:35.961358   13002 main.go:141] libmachine: STDOUT: 
	I0314 11:07:35.961408   13002 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0314 11:07:35.961421   13002 client.go:171] duration metric: took 313.235208ms to LocalClient.Create
	I0314 11:07:37.963636   13002 start.go:128] duration metric: took 2.368865833s to createHost
	I0314 11:07:37.963725   13002 start.go:83] releasing machines lock for "force-systemd-flag-911000", held for 2.369276208s
	W0314 11:07:37.964036   13002 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-911000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-911000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0314 11:07:37.976524   13002 out.go:177] 
	W0314 11:07:37.980604   13002 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0314 11:07:37.980617   13002 out.go:239] * 
	* 
	W0314 11:07:37.982083   13002 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0314 11:07:37.993506   13002 out.go:177] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-911000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-911000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-911000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (80.636041ms)

-- stdout --
	* The control-plane node force-systemd-flag-911000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-911000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-911000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-03-14 11:07:38.089874 -0700 PDT m=+740.602079085
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-911000 -n force-systemd-flag-911000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-911000 -n force-systemd-flag-911000: exit status 7 (36.921375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-911000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-911000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-911000
--- FAIL: TestForceSystemdFlag (10.05s)
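
Same failure signature as TestDockerFlags: the disk staging succeeds on both attempts (qemu-img convert and resize return empty STDERR) and only the socket_vmnet connection fails, so the cgroup-driver check never gets a running host to query. If the image staging needed to be ruled out independently, the driver's qcow2 steps can be replayed by hand; a sketch with hypothetical /tmp paths:

	qemu-img create -f raw /tmp/disk.qcow2.raw 1M
	qemu-img convert -f raw -O qcow2 /tmp/disk.qcow2.raw /tmp/disk.qcow2
	qemu-img resize /tmp/disk.qcow2 +20000M
	qemu-img info /tmp/disk.qcow2   # expect format qcow2, virtual size just over 19.5 GiB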

TestForceSystemdEnv (10.4s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-600000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-600000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.174702s)

-- stdout --
	* [force-systemd-env-600000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18384
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-600000" primary control-plane node in "force-systemd-env-600000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-600000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0314 11:07:22.790795   12966 out.go:291] Setting OutFile to fd 1 ...
	I0314 11:07:22.790947   12966 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:07:22.790950   12966 out.go:304] Setting ErrFile to fd 2...
	I0314 11:07:22.790953   12966 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:07:22.791080   12966 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
	I0314 11:07:22.792156   12966 out.go:298] Setting JSON to false
	I0314 11:07:22.809297   12966 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7614,"bootTime":1710432028,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0314 11:07:22.809357   12966 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0314 11:07:22.815585   12966 out.go:177] * [force-systemd-env-600000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0314 11:07:22.826361   12966 out.go:177]   - MINIKUBE_LOCATION=18384
	I0314 11:07:22.822569   12966 notify.go:220] Checking for updates...
	I0314 11:07:22.843523   12966 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	I0314 11:07:22.851447   12966 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0314 11:07:22.859463   12966 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 11:07:22.867497   12966 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	I0314 11:07:22.875472   12966 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0314 11:07:22.881385   12966 config.go:182] Loaded profile config "multinode-382000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 11:07:22.881447   12966 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 11:07:22.885504   12966 out.go:177] * Using the qemu2 driver based on user configuration
	I0314 11:07:22.892346   12966 start.go:297] selected driver: qemu2
	I0314 11:07:22.892358   12966 start.go:901] validating driver "qemu2" against <nil>
	I0314 11:07:22.892365   12966 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 11:07:22.895153   12966 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0314 11:07:22.898555   12966 out.go:177] * Automatically selected the socket_vmnet network
	I0314 11:07:22.902635   12966 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0314 11:07:22.902676   12966 cni.go:84] Creating CNI manager for ""
	I0314 11:07:22.902687   12966 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0314 11:07:22.902698   12966 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0314 11:07:22.902741   12966 start.go:340] cluster config:
	{Name:force-systemd-env-600000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-env-600000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 11:07:22.907933   12966 iso.go:125] acquiring lock: {Name:mkb282eb7d87bd36e7a71d78648e2dadb7f0a89e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 11:07:22.914474   12966 out.go:177] * Starting "force-systemd-env-600000" primary control-plane node in "force-systemd-env-600000" cluster
	I0314 11:07:22.918491   12966 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0314 11:07:22.918508   12966 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0314 11:07:22.918523   12966 cache.go:56] Caching tarball of preloaded images
	I0314 11:07:22.918590   12966 preload.go:173] Found /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0314 11:07:22.918611   12966 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0314 11:07:22.918688   12966 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/force-systemd-env-600000/config.json ...
	I0314 11:07:22.918703   12966 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/force-systemd-env-600000/config.json: {Name:mk57e7924056f0b590a44d400e32be1a77c0062c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 11:07:22.918911   12966 start.go:360] acquireMachinesLock for force-systemd-env-600000: {Name:mk7f342f84a1fcf2acaf2ebdc885e8ee2d848fd8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 11:07:22.918946   12966 start.go:364] duration metric: took 27.958µs to acquireMachinesLock for "force-systemd-env-600000"
	I0314 11:07:22.918958   12966 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-600000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-env-600000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0314 11:07:22.918991   12966 start.go:125] createHost starting for "" (driver="qemu2")
	I0314 11:07:22.927543   12966 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0314 11:07:22.944743   12966 start.go:159] libmachine.API.Create for "force-systemd-env-600000" (driver="qemu2")
	I0314 11:07:22.944769   12966 client.go:168] LocalClient.Create starting
	I0314 11:07:22.944828   12966 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/ca.pem
	I0314 11:07:22.944857   12966 main.go:141] libmachine: Decoding PEM data...
	I0314 11:07:22.944868   12966 main.go:141] libmachine: Parsing certificate...
	I0314 11:07:22.944910   12966 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/cert.pem
	I0314 11:07:22.944931   12966 main.go:141] libmachine: Decoding PEM data...
	I0314 11:07:22.944939   12966 main.go:141] libmachine: Parsing certificate...
	I0314 11:07:22.945302   12966 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18384-10823/.minikube/cache/iso/arm64/minikube-v1.32.1-1710348681-18375-arm64.iso...
	I0314 11:07:23.088043   12966 main.go:141] libmachine: Creating SSH key...
	I0314 11:07:23.143836   12966 main.go:141] libmachine: Creating Disk image...
	I0314 11:07:23.143841   12966 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0314 11:07:23.144063   12966 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/force-systemd-env-600000/disk.qcow2.raw /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/force-systemd-env-600000/disk.qcow2
	I0314 11:07:23.161828   12966 main.go:141] libmachine: STDOUT: 
	I0314 11:07:23.161852   12966 main.go:141] libmachine: STDERR: 
	I0314 11:07:23.161920   12966 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/force-systemd-env-600000/disk.qcow2 +20000M
	I0314 11:07:23.173088   12966 main.go:141] libmachine: STDOUT: Image resized.
	
	I0314 11:07:23.173104   12966 main.go:141] libmachine: STDERR: 
	I0314 11:07:23.173121   12966 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/force-systemd-env-600000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/force-systemd-env-600000/disk.qcow2
	I0314 11:07:23.173125   12966 main.go:141] libmachine: Starting QEMU VM...
	I0314 11:07:23.173157   12966 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/force-systemd-env-600000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/force-systemd-env-600000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/force-systemd-env-600000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:e4:6f:48:45:13 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/force-systemd-env-600000/disk.qcow2
	I0314 11:07:23.175408   12966 main.go:141] libmachine: STDOUT: 
	I0314 11:07:23.175423   12966 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0314 11:07:23.175442   12966 client.go:171] duration metric: took 230.671375ms to LocalClient.Create
	I0314 11:07:25.177713   12966 start.go:128] duration metric: took 2.258732583s to createHost
	I0314 11:07:25.177787   12966 start.go:83] releasing machines lock for "force-systemd-env-600000", held for 2.258877042s
	W0314 11:07:25.177820   12966 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0314 11:07:25.186504   12966 out.go:177] * Deleting "force-systemd-env-600000" in qemu2 ...
	W0314 11:07:25.224230   12966 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0314 11:07:25.224260   12966 start.go:728] Will try again in 5 seconds ...
	I0314 11:07:30.226313   12966 start.go:360] acquireMachinesLock for force-systemd-env-600000: {Name:mk7f342f84a1fcf2acaf2ebdc885e8ee2d848fd8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 11:07:30.549678   12966 start.go:364] duration metric: took 323.277083ms to acquireMachinesLock for "force-systemd-env-600000"
	I0314 11:07:30.549803   12966 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-600000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-env-600000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0314 11:07:30.550038   12966 start.go:125] createHost starting for "" (driver="qemu2")
	I0314 11:07:30.562536   12966 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0314 11:07:30.610425   12966 start.go:159] libmachine.API.Create for "force-systemd-env-600000" (driver="qemu2")
	I0314 11:07:30.610474   12966 client.go:168] LocalClient.Create starting
	I0314 11:07:30.610583   12966 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/ca.pem
	I0314 11:07:30.610651   12966 main.go:141] libmachine: Decoding PEM data...
	I0314 11:07:30.610665   12966 main.go:141] libmachine: Parsing certificate...
	I0314 11:07:30.610732   12966 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/cert.pem
	I0314 11:07:30.610774   12966 main.go:141] libmachine: Decoding PEM data...
	I0314 11:07:30.610784   12966 main.go:141] libmachine: Parsing certificate...
	I0314 11:07:30.611257   12966 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18384-10823/.minikube/cache/iso/arm64/minikube-v1.32.1-1710348681-18375-arm64.iso...
	I0314 11:07:30.763690   12966 main.go:141] libmachine: Creating SSH key...
	I0314 11:07:30.859539   12966 main.go:141] libmachine: Creating Disk image...
	I0314 11:07:30.859544   12966 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0314 11:07:30.859756   12966 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/force-systemd-env-600000/disk.qcow2.raw /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/force-systemd-env-600000/disk.qcow2
	I0314 11:07:30.872261   12966 main.go:141] libmachine: STDOUT: 
	I0314 11:07:30.872280   12966 main.go:141] libmachine: STDERR: 
	I0314 11:07:30.872331   12966 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/force-systemd-env-600000/disk.qcow2 +20000M
	I0314 11:07:30.883056   12966 main.go:141] libmachine: STDOUT: Image resized.
	
	I0314 11:07:30.883075   12966 main.go:141] libmachine: STDERR: 
	I0314 11:07:30.883097   12966 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/force-systemd-env-600000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/force-systemd-env-600000/disk.qcow2
	I0314 11:07:30.883102   12966 main.go:141] libmachine: Starting QEMU VM...
	I0314 11:07:30.883135   12966 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/force-systemd-env-600000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/force-systemd-env-600000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/force-systemd-env-600000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:16:56:10:45:5e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/force-systemd-env-600000/disk.qcow2
	I0314 11:07:30.884893   12966 main.go:141] libmachine: STDOUT: 
	I0314 11:07:30.884909   12966 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0314 11:07:30.884922   12966 client.go:171] duration metric: took 274.449125ms to LocalClient.Create
	I0314 11:07:32.885435   12966 start.go:128] duration metric: took 2.335418417s to createHost
	I0314 11:07:32.885483   12966 start.go:83] releasing machines lock for "force-systemd-env-600000", held for 2.335815708s
	W0314 11:07:32.885691   12966 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-600000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0314 11:07:32.901068   12966 out.go:177] 
	W0314 11:07:32.906159   12966 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0314 11:07:32.906191   12966 out.go:239] * 
	W0314 11:07:32.908740   12966 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0314 11:07:32.918079   12966 out.go:177] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-600000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-600000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-600000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (79.57025ms)

-- stdout --
	* The control-plane node force-systemd-env-600000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-600000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-600000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-03-14 11:07:33.016075 -0700 PDT m=+735.528184376
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-600000 -n force-systemd-env-600000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-600000 -n force-systemd-env-600000: exit status 7 (35.545541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-600000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-600000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-600000
--- FAIL: TestForceSystemdEnv (10.40s)
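Editor's note: every failure in this report reduces to the same root cause visible above: socket_vmnet_client cannot reach /var/run/socket_vmnet, so QEMU is never handed its network file descriptor and minikube aborts with GUEST_PROVISION. A minimal triage sketch for the CI host follows; the binary and socket paths are taken from the log, but the launchd service name is an assumption and may differ per install:

	# Is the socket_vmnet daemon up? (paths from the log; service name assumed)
	ls -l /var/run/socket_vmnet
	sudo launchctl list | grep -i socket_vmnet
	# If absent, start it by hand (flags per lima-vm/socket_vmnet's documented usage):
	sudo /opt/socket_vmnet/bin/socket_vmnet --socket-group staff /var/run/socket_vmnet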

TestErrorSpam/setup (9.98s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-967000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-967000 --driver=qemu2 
error_spam_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p nospam-967000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-967000 --driver=qemu2 : exit status 80 (9.97633275s)

-- stdout --
	* [nospam-967000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18384
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "nospam-967000" primary control-plane node in "nospam-967000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "nospam-967000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p nospam-967000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:83: "out/minikube-darwin-arm64 start -p nospam-967000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-967000 --driver=qemu2 " failed: exit status 80
error_spam_test.go:96: unexpected stderr: "! StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* Failed to start qemu2 VM. Running \"minikube delete -p nospam-967000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:96: unexpected stderr: "╭─────────────────────────────────────────────────────────────────────────────────────────────╮"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * If the above advice does not help, please let us know:                                 │"
error_spam_test.go:96: unexpected stderr: "│      https://github.com/kubernetes/minikube/issues/new/choose                               │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "╰─────────────────────────────────────────────────────────────────────────────────────────────╯"
error_spam_test.go:110: minikube stdout:
* [nospam-967000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
- MINIKUBE_LOCATION=18384
- KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "nospam-967000" primary control-plane node in "nospam-967000" cluster
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "nospam-967000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

error_spam_test.go:111: minikube stderr:
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p nospam-967000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
error_spam_test.go:121: missing kubeadm init sub-step "Generating certificates and keys ..."
error_spam_test.go:121: missing kubeadm init sub-step "Booting up control plane ..."
error_spam_test.go:121: missing kubeadm init sub-step "Configuring RBAC rules ..."
--- FAIL: TestErrorSpam/setup (9.98s)
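Editor's note: error_spam_test treats every unexpected stderr line as its own failure and also greps stdout for the kubeadm sub-steps, which is why a single provisioning error fans out into the dozen assertions above. A rough, hedged reproduction of the check (file names illustrative only):

	out/minikube-darwin-arm64 start -p nospam-967000 -n=1 --memory=2250 --wait=false --driver=qemu2 \
	  >stdout.txt 2>stderr.txt
	grep -E '^[!X]' stderr.txt                            # any hit is reported as "unexpected stderr"
	grep 'Generating certificates and keys' stdout.txt    # one of the required kubeadm sub-steps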

TestFunctional/serial/StartWithProxy (9.96s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-780000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2230: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-780000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : exit status 80 (9.882173708s)

-- stdout --
	* [functional-780000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18384
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "functional-780000" primary control-plane node in "functional-780000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "functional-780000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:51926 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:51926 to docker env.
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! Local proxy ignored: not passing HTTP_PROXY=localhost:51926 to docker env.
	* Failed to start qemu2 VM. Running "minikube delete -p functional-780000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2232: failed minikube start. args "out/minikube-darwin-arm64 start -p functional-780000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 ": exit status 80
functional_test.go:2237: start stdout=* [functional-780000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
- MINIKUBE_LOCATION=18384
- KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "functional-780000" primary control-plane node in "functional-780000" cluster
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "functional-780000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

, want: *Found network options:*
functional_test.go:2242: start stderr=! Local proxy ignored: not passing HTTP_PROXY=localhost:51926 to docker env.
! Local proxy ignored: not passing HTTP_PROXY=localhost:51926 to docker env.
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! Local proxy ignored: not passing HTTP_PROXY=localhost:51926 to docker env.
* Failed to start qemu2 VM. Running "minikube delete -p functional-780000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
, want: *You appear to be using a proxy*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-780000 -n functional-780000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-780000 -n functional-780000: exit status 7 (72.954583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-780000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/StartWithProxy (9.96s)
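Editor's note: three exit codes recur throughout this report; the mapping below is read off the accompanying messages, not taken from minikube's source, and the commands are illustrative:

	out/minikube-darwin-arm64 start -p <profile> ...  ; echo $?   # 80: GUEST_PROVISION, VM never came up
	out/minikube-darwin-arm64 -p <profile> ssh ...    ; echo $?   # 83: control-plane host not running
	out/minikube-darwin-arm64 status -p <profile> ... ; echo $?   # 7: host state "Stopped" (may be ok)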

TestFunctional/serial/SoftStart (5.26s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-780000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-780000 --alsologtostderr -v=8: exit status 80 (5.190771959s)

-- stdout --
	* [functional-780000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18384
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-780000" primary control-plane node in "functional-780000" cluster
	* Restarting existing qemu2 VM for "functional-780000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-780000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0314 10:57:28.109206   11555 out.go:291] Setting OutFile to fd 1 ...
	I0314 10:57:28.109328   11555 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 10:57:28.109331   11555 out.go:304] Setting ErrFile to fd 2...
	I0314 10:57:28.109334   11555 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 10:57:28.109461   11555 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
	I0314 10:57:28.110393   11555 out.go:298] Setting JSON to false
	I0314 10:57:28.126317   11555 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7020,"bootTime":1710432028,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0314 10:57:28.126381   11555 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0314 10:57:28.131964   11555 out.go:177] * [functional-780000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0314 10:57:28.138914   11555 out.go:177]   - MINIKUBE_LOCATION=18384
	I0314 10:57:28.142945   11555 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	I0314 10:57:28.139031   11555 notify.go:220] Checking for updates...
	I0314 10:57:28.149869   11555 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0314 10:57:28.152914   11555 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 10:57:28.155847   11555 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	I0314 10:57:28.158904   11555 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 10:57:28.162229   11555 config.go:182] Loaded profile config "functional-780000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 10:57:28.162292   11555 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 10:57:28.166873   11555 out.go:177] * Using the qemu2 driver based on existing profile
	I0314 10:57:28.173900   11555 start.go:297] selected driver: qemu2
	I0314 10:57:28.173905   11555 start.go:901] validating driver "qemu2" against &{Name:functional-780000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-780000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 10:57:28.173975   11555 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 10:57:28.176220   11555 cni.go:84] Creating CNI manager for ""
	I0314 10:57:28.176236   11555 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0314 10:57:28.176278   11555 start.go:340] cluster config:
	{Name:functional-780000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-780000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 10:57:28.180600   11555 iso.go:125] acquiring lock: {Name:mkb282eb7d87bd36e7a71d78648e2dadb7f0a89e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 10:57:28.188859   11555 out.go:177] * Starting "functional-780000" primary control-plane node in "functional-780000" cluster
	I0314 10:57:28.192843   11555 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0314 10:57:28.192862   11555 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0314 10:57:28.192874   11555 cache.go:56] Caching tarball of preloaded images
	I0314 10:57:28.192930   11555 preload.go:173] Found /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0314 10:57:28.192936   11555 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0314 10:57:28.192990   11555 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/functional-780000/config.json ...
	I0314 10:57:28.193382   11555 start.go:360] acquireMachinesLock for functional-780000: {Name:mk7f342f84a1fcf2acaf2ebdc885e8ee2d848fd8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 10:57:28.193408   11555 start.go:364] duration metric: took 19.625µs to acquireMachinesLock for "functional-780000"
	I0314 10:57:28.193416   11555 start.go:96] Skipping create...Using existing machine configuration
	I0314 10:57:28.193420   11555 fix.go:54] fixHost starting: 
	I0314 10:57:28.193556   11555 fix.go:112] recreateIfNeeded on functional-780000: state=Stopped err=<nil>
	W0314 10:57:28.193564   11555 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 10:57:28.200873   11555 out.go:177] * Restarting existing qemu2 VM for "functional-780000" ...
	I0314 10:57:28.204864   11555 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/functional-780000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/functional-780000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/functional-780000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:b0:7a:77:56:0a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/functional-780000/disk.qcow2
	I0314 10:57:28.206864   11555 main.go:141] libmachine: STDOUT: 
	I0314 10:57:28.206886   11555 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0314 10:57:28.206912   11555 fix.go:56] duration metric: took 13.490417ms for fixHost
	I0314 10:57:28.206915   11555 start.go:83] releasing machines lock for "functional-780000", held for 13.5045ms
	W0314 10:57:28.206921   11555 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0314 10:57:28.206958   11555 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0314 10:57:28.206963   11555 start.go:728] Will try again in 5 seconds ...
	I0314 10:57:33.208583   11555 start.go:360] acquireMachinesLock for functional-780000: {Name:mk7f342f84a1fcf2acaf2ebdc885e8ee2d848fd8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 10:57:33.208871   11555 start.go:364] duration metric: took 169.334µs to acquireMachinesLock for "functional-780000"
	I0314 10:57:33.208968   11555 start.go:96] Skipping create...Using existing machine configuration
	I0314 10:57:33.208985   11555 fix.go:54] fixHost starting: 
	I0314 10:57:33.209468   11555 fix.go:112] recreateIfNeeded on functional-780000: state=Stopped err=<nil>
	W0314 10:57:33.209483   11555 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 10:57:33.213010   11555 out.go:177] * Restarting existing qemu2 VM for "functional-780000" ...
	I0314 10:57:33.220856   11555 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/functional-780000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/functional-780000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/functional-780000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:b0:7a:77:56:0a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/functional-780000/disk.qcow2
	I0314 10:57:33.229512   11555 main.go:141] libmachine: STDOUT: 
	I0314 10:57:33.229579   11555 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0314 10:57:33.229680   11555 fix.go:56] duration metric: took 20.691667ms for fixHost
	I0314 10:57:33.229697   11555 start.go:83] releasing machines lock for "functional-780000", held for 20.804083ms
	W0314 10:57:33.229881   11555 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-780000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0314 10:57:33.237854   11555 out.go:177] 
	W0314 10:57:33.241928   11555 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0314 10:57:33.241958   11555 out.go:239] * 
	W0314 10:57:33.244429   11555 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0314 10:57:33.253768   11555 out.go:177] 

** /stderr **
functional_test.go:657: failed to soft start minikube. args "out/minikube-darwin-arm64 start -p functional-780000 --alsologtostderr -v=8": exit status 80
functional_test.go:659: soft start took 5.192291209s for "functional-780000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-780000 -n functional-780000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-780000 -n functional-780000: exit status 7 (68.212917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-780000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/SoftStart (5.26s)
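Editor's note: the restart path dies at the very first step, the socket_vmnet_client wrapper, before qemu-system-aarch64 can run. One way to confirm that, sketched with a no-op standing in for the VM command (the SOCKETPATH-then-COMMAND calling convention is inferred from the invocation logged above):

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
	# the same "Connection refused" here isolates the daemon, not QEMU, as the failing piece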

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
functional_test.go:677: (dbg) Non-zero exit: kubectl config current-context: exit status 1 (29.971792ms)

** stderr ** 
	error: current-context is not set

** /stderr **
functional_test.go:679: failed to get current-context. args "kubectl config current-context" : exit status 1
functional_test.go:683: expected current-context = "functional-780000", but got *""*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-780000 -n functional-780000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-780000 -n functional-780000: exit status 7 (32.797417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-780000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubeContext (0.06s)
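Editor's note: this and the following kubectl failures are downstream of the aborted start; no context was ever written for the profile. A quick check of what the failed start left behind, using the KUBECONFIG path from the log (the expected output described in the comment is a sketch, not captured here):

	KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig \
	  kubectl config get-contexts
	# a functional-780000 entry appears only after a successful start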

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-780000 get po -A
functional_test.go:692: (dbg) Non-zero exit: kubectl --context functional-780000 get po -A: exit status 1 (25.730291ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-780000

** /stderr **
functional_test.go:694: failed to get kubectl pods: args "kubectl --context functional-780000 get po -A" : exit status 1
functional_test.go:698: expected stderr to be empty but got *"Error in configuration: context was not found for specified context: functional-780000\n"*: args "kubectl --context functional-780000 get po -A"
functional_test.go:701: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-780000 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-780000 -n functional-780000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-780000 -n functional-780000: exit status 7 (32.536041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-780000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-arm64 -p functional-780000 ssh sudo crictl images
functional_test.go:1120: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-780000 ssh sudo crictl images: exit status 83 (45.724458ms)

-- stdout --
	* The control-plane node functional-780000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-780000"

-- /stdout --
functional_test.go:1122: failed to get images by "out/minikube-darwin-arm64 -p functional-780000 ssh sudo crictl images" ssh exit status 83
functional_test.go:1126: expected sha for pause:3.3 "3d18732f8686c" to be in the output but got *
-- stdout --
	* The control-plane node functional-780000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-780000"

-- /stdout --*
--- FAIL: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.05s)
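Editor's note: for reference, the equivalent manual check on a healthy cluster would be the following; the sha 3d18732f8686c is the one the test itself expects for pause:3.3 on arm64:

	out/minikube-darwin-arm64 -p functional-780000 ssh sudo crictl images | grep 3d18732f8686c
	# with the host stopped, ssh exits 83 before crictl ever runs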

TestFunctional/serial/CacheCmd/cache/cache_reload (0.17s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-arm64 -p functional-780000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1143: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-780000 ssh sudo docker rmi registry.k8s.io/pause:latest: exit status 83 (41.914333ms)

-- stdout --
	* The control-plane node functional-780000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-780000"

-- /stdout --
functional_test.go:1146: failed to manually delete image "out/minikube-darwin-arm64 -p functional-780000 ssh sudo docker rmi registry.k8s.io/pause:latest" : exit status 83
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-arm64 -p functional-780000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-780000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (44.820292ms)

-- stdout --
	* The control-plane node functional-780000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-780000"

-- /stdout --
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-arm64 -p functional-780000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-arm64 -p functional-780000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-780000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (43.834333ms)

-- stdout --
	* The control-plane node functional-780000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-780000"

-- /stdout --
functional_test.go:1161: expected "out/minikube-darwin-arm64 -p functional-780000 ssh sudo crictl inspecti registry.k8s.io/pause:latest" to run successfully but got error: exit status 83
--- FAIL: TestFunctional/serial/CacheCmd/cache/cache_reload (0.17s)

TestFunctional/serial/MinikubeKubectlCmd (0.56s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-arm64 -p functional-780000 kubectl -- --context functional-780000 get pods
functional_test.go:712: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-780000 kubectl -- --context functional-780000 get pods: exit status 1 (520.721ms)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-780000
	* no server found for cluster "functional-780000"

** /stderr **
functional_test.go:715: failed to get pods. args "out/minikube-darwin-arm64 -p functional-780000 kubectl -- --context functional-780000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-780000 -n functional-780000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-780000 -n functional-780000: exit status 7 (34.238875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-780000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (0.56s)
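Note: the "context was not found" / "no server found" errors here (and in MinikubeKubectlCmdDirectly and ComponentHealth below) share one cause: the earlier `minikube start` never completed, so no context or cluster entry for functional-780000 was ever written to the kubeconfig. A quick way to confirm what the kubeconfig actually contains, using standard kubectl subcommands:

	$ kubectl config get-contexts
	$ kubectl config get-clusters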

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.72s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-780000 get pods
functional_test.go:737: (dbg) Non-zero exit: out/kubectl --context functional-780000 get pods: exit status 1 (684.920125ms)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-780000
	* no server found for cluster "functional-780000"

** /stderr **
functional_test.go:740: failed to run kubectl directly. args "out/kubectl --context functional-780000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-780000 -n functional-780000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-780000 -n functional-780000: exit status 7 (32.038541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-780000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.72s)

TestFunctional/serial/ExtraConfig (5.26s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-780000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-780000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (5.184895417s)

-- stdout --
	* [functional-780000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18384
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-780000" primary control-plane node in "functional-780000" cluster
	* Restarting existing qemu2 VM for "functional-780000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-780000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-780000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:755: failed to restart minikube. args "out/minikube-darwin-arm64 start -p functional-780000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:757: restart took 5.185421417s for "functional-780000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-780000 -n functional-780000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-780000 -n functional-780000: exit status 7 (72.044208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-780000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (5.26s)
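Note: the root cause of this restart failure (and of most start-based failures in this report) is the network backend, not Kubernetes itself: the profile is configured with Network:socket_vmnet, and every qemu-system-aarch64 launch aborts with `Failed to connect to "/var/run/socket_vmnet": Connection refused`, i.e. nothing is listening on the socket_vmnet socket on this agent. A plausible first check on the build machine — assuming socket_vmnet was installed via Homebrew, as minikube's qemu2 driver docs suggest — would be:

	$ ls -l /var/run/socket_vmnet            # the socket should exist while the daemon runs
	$ sudo brew services start socket_vmnet  # (re)start the daemon if the Homebrew service is installed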

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-780000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:806: (dbg) Non-zero exit: kubectl --context functional-780000 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (29.179041ms)

** stderr ** 
	error: context "functional-780000" does not exist

** /stderr **
functional_test.go:808: failed to get components. args "kubectl --context functional-780000 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-780000 -n functional-780000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-780000 -n functional-780000: exit status 7 (32.735792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-780000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (0.08s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-arm64 -p functional-780000 logs
functional_test.go:1232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-780000 logs: exit status 83 (81.73125ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                  | download-only-659000 | jenkins | v1.32.0 | 14 Mar 24 10:55 PDT |                     |
	|         | -p download-only-659000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.32.0 | 14 Mar 24 10:55 PDT | 14 Mar 24 10:55 PDT |
	| delete  | -p download-only-659000                                                  | download-only-659000 | jenkins | v1.32.0 | 14 Mar 24 10:55 PDT | 14 Mar 24 10:55 PDT |
	| start   | -o=json --download-only                                                  | download-only-905000 | jenkins | v1.32.0 | 14 Mar 24 10:55 PDT |                     |
	|         | -p download-only-905000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.32.0 | 14 Mar 24 10:56 PDT | 14 Mar 24 10:56 PDT |
	| delete  | -p download-only-905000                                                  | download-only-905000 | jenkins | v1.32.0 | 14 Mar 24 10:56 PDT | 14 Mar 24 10:56 PDT |
	| start   | -o=json --download-only                                                  | download-only-045000 | jenkins | v1.32.0 | 14 Mar 24 10:56 PDT |                     |
	|         | -p download-only-045000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                                        |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.32.0 | 14 Mar 24 10:56 PDT | 14 Mar 24 10:56 PDT |
	| delete  | -p download-only-045000                                                  | download-only-045000 | jenkins | v1.32.0 | 14 Mar 24 10:56 PDT | 14 Mar 24 10:56 PDT |
	| delete  | -p download-only-659000                                                  | download-only-659000 | jenkins | v1.32.0 | 14 Mar 24 10:56 PDT | 14 Mar 24 10:56 PDT |
	| delete  | -p download-only-905000                                                  | download-only-905000 | jenkins | v1.32.0 | 14 Mar 24 10:56 PDT | 14 Mar 24 10:56 PDT |
	| delete  | -p download-only-045000                                                  | download-only-045000 | jenkins | v1.32.0 | 14 Mar 24 10:56 PDT | 14 Mar 24 10:56 PDT |
	| start   | --download-only -p                                                       | binary-mirror-003000 | jenkins | v1.32.0 | 14 Mar 24 10:56 PDT |                     |
	|         | binary-mirror-003000                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
	|         | --binary-mirror                                                          |                      |         |         |                     |                     |
	|         | http://127.0.0.1:51894                                                   |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-003000                                                  | binary-mirror-003000 | jenkins | v1.32.0 | 14 Mar 24 10:56 PDT | 14 Mar 24 10:56 PDT |
	| addons  | enable dashboard -p                                                      | addons-532000        | jenkins | v1.32.0 | 14 Mar 24 10:56 PDT |                     |
	|         | addons-532000                                                            |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                     | addons-532000        | jenkins | v1.32.0 | 14 Mar 24 10:56 PDT |                     |
	|         | addons-532000                                                            |                      |         |         |                     |                     |
	| start   | -p addons-532000 --wait=true                                             | addons-532000        | jenkins | v1.32.0 | 14 Mar 24 10:56 PDT |                     |
	|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
	|         | --addons=registry                                                        |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
	|         | --addons=yakd --driver=qemu2                                             |                      |         |         |                     |                     |
	|         |  --addons=ingress                                                        |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
	| delete  | -p addons-532000                                                         | addons-532000        | jenkins | v1.32.0 | 14 Mar 24 10:56 PDT | 14 Mar 24 10:56 PDT |
	| start   | -p nospam-967000 -n=1 --memory=2250 --wait=false                         | nospam-967000        | jenkins | v1.32.0 | 14 Mar 24 10:56 PDT |                     |
	|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-967000 |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| start   | nospam-967000 --log_dir                                                  | nospam-967000        | jenkins | v1.32.0 | 14 Mar 24 10:57 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-967000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-967000 --log_dir                                                  | nospam-967000        | jenkins | v1.32.0 | 14 Mar 24 10:57 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-967000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-967000 --log_dir                                                  | nospam-967000        | jenkins | v1.32.0 | 14 Mar 24 10:57 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-967000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| pause   | nospam-967000 --log_dir                                                  | nospam-967000        | jenkins | v1.32.0 | 14 Mar 24 10:57 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-967000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-967000 --log_dir                                                  | nospam-967000        | jenkins | v1.32.0 | 14 Mar 24 10:57 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-967000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-967000 --log_dir                                                  | nospam-967000        | jenkins | v1.32.0 | 14 Mar 24 10:57 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-967000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| unpause | nospam-967000 --log_dir                                                  | nospam-967000        | jenkins | v1.32.0 | 14 Mar 24 10:57 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-967000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-967000 --log_dir                                                  | nospam-967000        | jenkins | v1.32.0 | 14 Mar 24 10:57 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-967000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-967000 --log_dir                                                  | nospam-967000        | jenkins | v1.32.0 | 14 Mar 24 10:57 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-967000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| stop    | nospam-967000 --log_dir                                                  | nospam-967000        | jenkins | v1.32.0 | 14 Mar 24 10:57 PDT | 14 Mar 24 10:57 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-967000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-967000 --log_dir                                                  | nospam-967000        | jenkins | v1.32.0 | 14 Mar 24 10:57 PDT | 14 Mar 24 10:57 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-967000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-967000 --log_dir                                                  | nospam-967000        | jenkins | v1.32.0 | 14 Mar 24 10:57 PDT | 14 Mar 24 10:57 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-967000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| delete  | -p nospam-967000                                                         | nospam-967000        | jenkins | v1.32.0 | 14 Mar 24 10:57 PDT | 14 Mar 24 10:57 PDT |
	| start   | -p functional-780000                                                     | functional-780000    | jenkins | v1.32.0 | 14 Mar 24 10:57 PDT |                     |
	|         | --memory=4000                                                            |                      |         |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
	|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
	| start   | -p functional-780000                                                     | functional-780000    | jenkins | v1.32.0 | 14 Mar 24 10:57 PDT |                     |
	|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
	| cache   | functional-780000 cache add                                              | functional-780000    | jenkins | v1.32.0 | 14 Mar 24 10:57 PDT | 14 Mar 24 10:57 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | functional-780000 cache add                                              | functional-780000    | jenkins | v1.32.0 | 14 Mar 24 10:57 PDT | 14 Mar 24 10:57 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | functional-780000 cache add                                              | functional-780000    | jenkins | v1.32.0 | 14 Mar 24 10:57 PDT | 14 Mar 24 10:57 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-780000 cache add                                              | functional-780000    | jenkins | v1.32.0 | 14 Mar 24 10:57 PDT | 14 Mar 24 10:57 PDT |
	|         | minikube-local-cache-test:functional-780000                              |                      |         |         |                     |                     |
	| cache   | functional-780000 cache delete                                           | functional-780000    | jenkins | v1.32.0 | 14 Mar 24 10:57 PDT | 14 Mar 24 10:57 PDT |
	|         | minikube-local-cache-test:functional-780000                              |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.32.0 | 14 Mar 24 10:57 PDT | 14 Mar 24 10:57 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | list                                                                     | minikube             | jenkins | v1.32.0 | 14 Mar 24 10:57 PDT | 14 Mar 24 10:57 PDT |
	| ssh     | functional-780000 ssh sudo                                               | functional-780000    | jenkins | v1.32.0 | 14 Mar 24 10:57 PDT |                     |
	|         | crictl images                                                            |                      |         |         |                     |                     |
	| ssh     | functional-780000                                                        | functional-780000    | jenkins | v1.32.0 | 14 Mar 24 10:57 PDT |                     |
	|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| ssh     | functional-780000 ssh                                                    | functional-780000    | jenkins | v1.32.0 | 14 Mar 24 10:57 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-780000 cache reload                                           | functional-780000    | jenkins | v1.32.0 | 14 Mar 24 10:57 PDT | 14 Mar 24 10:57 PDT |
	| ssh     | functional-780000 ssh                                                    | functional-780000    | jenkins | v1.32.0 | 14 Mar 24 10:57 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.32.0 | 14 Mar 24 10:57 PDT | 14 Mar 24 10:57 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.32.0 | 14 Mar 24 10:57 PDT | 14 Mar 24 10:57 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| kubectl | functional-780000 kubectl --                                             | functional-780000    | jenkins | v1.32.0 | 14 Mar 24 10:57 PDT |                     |
	|         | --context functional-780000                                              |                      |         |         |                     |                     |
	|         | get pods                                                                 |                      |         |         |                     |                     |
	| start   | -p functional-780000                                                     | functional-780000    | jenkins | v1.32.0 | 14 Mar 24 10:57 PDT |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
	|         | --wait=all                                                               |                      |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/14 10:57:42
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0314 10:57:42.362931   11635 out.go:291] Setting OutFile to fd 1 ...
	I0314 10:57:42.363047   11635 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 10:57:42.363048   11635 out.go:304] Setting ErrFile to fd 2...
	I0314 10:57:42.363050   11635 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 10:57:42.363171   11635 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
	I0314 10:57:42.364128   11635 out.go:298] Setting JSON to false
	I0314 10:57:42.380246   11635 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7034,"bootTime":1710432028,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0314 10:57:42.380308   11635 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0314 10:57:42.385393   11635 out.go:177] * [functional-780000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0314 10:57:42.392206   11635 out.go:177]   - MINIKUBE_LOCATION=18384
	I0314 10:57:42.392253   11635 notify.go:220] Checking for updates...
	I0314 10:57:42.396318   11635 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	I0314 10:57:42.400162   11635 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0314 10:57:42.403206   11635 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 10:57:42.406244   11635 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	I0314 10:57:42.409216   11635 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 10:57:42.412562   11635 config.go:182] Loaded profile config "functional-780000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 10:57:42.412615   11635 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 10:57:42.417223   11635 out.go:177] * Using the qemu2 driver based on existing profile
	I0314 10:57:42.424172   11635 start.go:297] selected driver: qemu2
	I0314 10:57:42.424176   11635 start.go:901] validating driver "qemu2" against &{Name:functional-780000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-780000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 10:57:42.424217   11635 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 10:57:42.426490   11635 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 10:57:42.426540   11635 cni.go:84] Creating CNI manager for ""
	I0314 10:57:42.426547   11635 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0314 10:57:42.426585   11635 start.go:340] cluster config:
	{Name:functional-780000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-780000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 10:57:42.430935   11635 iso.go:125] acquiring lock: {Name:mkb282eb7d87bd36e7a71d78648e2dadb7f0a89e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 10:57:42.438192   11635 out.go:177] * Starting "functional-780000" primary control-plane node in "functional-780000" cluster
	I0314 10:57:42.442201   11635 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0314 10:57:42.442214   11635 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0314 10:57:42.442227   11635 cache.go:56] Caching tarball of preloaded images
	I0314 10:57:42.442288   11635 preload.go:173] Found /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0314 10:57:42.442293   11635 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0314 10:57:42.442376   11635 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/functional-780000/config.json ...
	I0314 10:57:42.442899   11635 start.go:360] acquireMachinesLock for functional-780000: {Name:mk7f342f84a1fcf2acaf2ebdc885e8ee2d848fd8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 10:57:42.442931   11635 start.go:364] duration metric: took 27.625µs to acquireMachinesLock for "functional-780000"
	I0314 10:57:42.442940   11635 start.go:96] Skipping create...Using existing machine configuration
	I0314 10:57:42.442945   11635 fix.go:54] fixHost starting: 
	I0314 10:57:42.443071   11635 fix.go:112] recreateIfNeeded on functional-780000: state=Stopped err=<nil>
	W0314 10:57:42.443078   11635 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 10:57:42.451218   11635 out.go:177] * Restarting existing qemu2 VM for "functional-780000" ...
	I0314 10:57:42.454269   11635 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/functional-780000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/functional-780000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/functional-780000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:b0:7a:77:56:0a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/functional-780000/disk.qcow2
	I0314 10:57:42.456330   11635 main.go:141] libmachine: STDOUT: 
	I0314 10:57:42.456348   11635 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0314 10:57:42.456379   11635 fix.go:56] duration metric: took 13.434958ms for fixHost
	I0314 10:57:42.456383   11635 start.go:83] releasing machines lock for "functional-780000", held for 13.449416ms
	W0314 10:57:42.456389   11635 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0314 10:57:42.456423   11635 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0314 10:57:42.456428   11635 start.go:728] Will try again in 5 seconds ...
	I0314 10:57:47.458541   11635 start.go:360] acquireMachinesLock for functional-780000: {Name:mk7f342f84a1fcf2acaf2ebdc885e8ee2d848fd8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 10:57:47.458835   11635 start.go:364] duration metric: took 245.625µs to acquireMachinesLock for "functional-780000"
	I0314 10:57:47.458958   11635 start.go:96] Skipping create...Using existing machine configuration
	I0314 10:57:47.458972   11635 fix.go:54] fixHost starting: 
	I0314 10:57:47.459760   11635 fix.go:112] recreateIfNeeded on functional-780000: state=Stopped err=<nil>
	W0314 10:57:47.459779   11635 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 10:57:47.469210   11635 out.go:177] * Restarting existing qemu2 VM for "functional-780000" ...
	I0314 10:57:47.473359   11635 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/functional-780000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/functional-780000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/functional-780000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:b0:7a:77:56:0a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/functional-780000/disk.qcow2
	I0314 10:57:47.482755   11635 main.go:141] libmachine: STDOUT: 
	I0314 10:57:47.482815   11635 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0314 10:57:47.482883   11635 fix.go:56] duration metric: took 23.913375ms for fixHost
	I0314 10:57:47.482894   11635 start.go:83] releasing machines lock for "functional-780000", held for 24.04825ms
	W0314 10:57:47.483069   11635 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-780000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0314 10:57:47.489118   11635 out.go:177] 
	W0314 10:57:47.493179   11635 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0314 10:57:47.493205   11635 out.go:239] * 
	W0314 10:57:47.495977   11635 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0314 10:57:47.503182   11635 out.go:177] 
	
	
	* The control-plane node functional-780000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-780000"

-- /stdout --
functional_test.go:1234: out/minikube-darwin-arm64 -p functional-780000 logs failed: exit status 83
functional_test.go:1224: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-659000 | jenkins | v1.32.0 | 14 Mar 24 10:55 PDT |                     |
|         | -p download-only-659000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.32.0 | 14 Mar 24 10:55 PDT | 14 Mar 24 10:55 PDT |
| delete  | -p download-only-659000                                                  | download-only-659000 | jenkins | v1.32.0 | 14 Mar 24 10:55 PDT | 14 Mar 24 10:55 PDT |
| start   | -o=json --download-only                                                  | download-only-905000 | jenkins | v1.32.0 | 14 Mar 24 10:55 PDT |                     |
|         | -p download-only-905000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.28.4                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.32.0 | 14 Mar 24 10:56 PDT | 14 Mar 24 10:56 PDT |
| delete  | -p download-only-905000                                                  | download-only-905000 | jenkins | v1.32.0 | 14 Mar 24 10:56 PDT | 14 Mar 24 10:56 PDT |
| start   | -o=json --download-only                                                  | download-only-045000 | jenkins | v1.32.0 | 14 Mar 24 10:56 PDT |                     |
|         | -p download-only-045000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.29.0-rc.2                                        |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.32.0 | 14 Mar 24 10:56 PDT | 14 Mar 24 10:56 PDT |
| delete  | -p download-only-045000                                                  | download-only-045000 | jenkins | v1.32.0 | 14 Mar 24 10:56 PDT | 14 Mar 24 10:56 PDT |
| delete  | -p download-only-659000                                                  | download-only-659000 | jenkins | v1.32.0 | 14 Mar 24 10:56 PDT | 14 Mar 24 10:56 PDT |
| delete  | -p download-only-905000                                                  | download-only-905000 | jenkins | v1.32.0 | 14 Mar 24 10:56 PDT | 14 Mar 24 10:56 PDT |
| delete  | -p download-only-045000                                                  | download-only-045000 | jenkins | v1.32.0 | 14 Mar 24 10:56 PDT | 14 Mar 24 10:56 PDT |
| start   | --download-only -p                                                       | binary-mirror-003000 | jenkins | v1.32.0 | 14 Mar 24 10:56 PDT |                     |
|         | binary-mirror-003000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:51894                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-003000                                                  | binary-mirror-003000 | jenkins | v1.32.0 | 14 Mar 24 10:56 PDT | 14 Mar 24 10:56 PDT |
| addons  | enable dashboard -p                                                      | addons-532000        | jenkins | v1.32.0 | 14 Mar 24 10:56 PDT |                     |
|         | addons-532000                                                            |                      |         |         |                     |                     |
| addons  | disable dashboard -p                                                     | addons-532000        | jenkins | v1.32.0 | 14 Mar 24 10:56 PDT |                     |
|         | addons-532000                                                            |                      |         |         |                     |                     |
| start   | -p addons-532000 --wait=true                                             | addons-532000        | jenkins | v1.32.0 | 14 Mar 24 10:56 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --driver=qemu2                                             |                      |         |         |                     |                     |
|         |  --addons=ingress                                                        |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
| delete  | -p addons-532000                                                         | addons-532000        | jenkins | v1.32.0 | 14 Mar 24 10:56 PDT | 14 Mar 24 10:56 PDT |
| start   | -p nospam-967000 -n=1 --memory=2250 --wait=false                         | nospam-967000        | jenkins | v1.32.0 | 14 Mar 24 10:56 PDT |                     |
|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-967000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-967000 --log_dir                                                  | nospam-967000        | jenkins | v1.32.0 | 14 Mar 24 10:57 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-967000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-967000 --log_dir                                                  | nospam-967000        | jenkins | v1.32.0 | 14 Mar 24 10:57 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-967000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-967000 --log_dir                                                  | nospam-967000        | jenkins | v1.32.0 | 14 Mar 24 10:57 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-967000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-967000 --log_dir                                                  | nospam-967000        | jenkins | v1.32.0 | 14 Mar 24 10:57 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-967000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-967000 --log_dir                                                  | nospam-967000        | jenkins | v1.32.0 | 14 Mar 24 10:57 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-967000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-967000 --log_dir                                                  | nospam-967000        | jenkins | v1.32.0 | 14 Mar 24 10:57 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-967000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-967000 --log_dir                                                  | nospam-967000        | jenkins | v1.32.0 | 14 Mar 24 10:57 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-967000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-967000 --log_dir                                                  | nospam-967000        | jenkins | v1.32.0 | 14 Mar 24 10:57 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-967000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-967000 --log_dir                                                  | nospam-967000        | jenkins | v1.32.0 | 14 Mar 24 10:57 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-967000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-967000 --log_dir                                                  | nospam-967000        | jenkins | v1.32.0 | 14 Mar 24 10:57 PDT | 14 Mar 24 10:57 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-967000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-967000 --log_dir                                                  | nospam-967000        | jenkins | v1.32.0 | 14 Mar 24 10:57 PDT | 14 Mar 24 10:57 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-967000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-967000 --log_dir                                                  | nospam-967000        | jenkins | v1.32.0 | 14 Mar 24 10:57 PDT | 14 Mar 24 10:57 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-967000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-967000                                                         | nospam-967000        | jenkins | v1.32.0 | 14 Mar 24 10:57 PDT | 14 Mar 24 10:57 PDT |
| start   | -p functional-780000                                                     | functional-780000    | jenkins | v1.32.0 | 14 Mar 24 10:57 PDT |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-780000                                                     | functional-780000    | jenkins | v1.32.0 | 14 Mar 24 10:57 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-780000 cache add                                              | functional-780000    | jenkins | v1.32.0 | 14 Mar 24 10:57 PDT | 14 Mar 24 10:57 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-780000 cache add                                              | functional-780000    | jenkins | v1.32.0 | 14 Mar 24 10:57 PDT | 14 Mar 24 10:57 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-780000 cache add                                              | functional-780000    | jenkins | v1.32.0 | 14 Mar 24 10:57 PDT | 14 Mar 24 10:57 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-780000 cache add                                              | functional-780000    | jenkins | v1.32.0 | 14 Mar 24 10:57 PDT | 14 Mar 24 10:57 PDT |
|         | minikube-local-cache-test:functional-780000                              |                      |         |         |                     |                     |
| cache   | functional-780000 cache delete                                           | functional-780000    | jenkins | v1.32.0 | 14 Mar 24 10:57 PDT | 14 Mar 24 10:57 PDT |
|         | minikube-local-cache-test:functional-780000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.32.0 | 14 Mar 24 10:57 PDT | 14 Mar 24 10:57 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.32.0 | 14 Mar 24 10:57 PDT | 14 Mar 24 10:57 PDT |
| ssh     | functional-780000 ssh sudo                                               | functional-780000    | jenkins | v1.32.0 | 14 Mar 24 10:57 PDT |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-780000                                                        | functional-780000    | jenkins | v1.32.0 | 14 Mar 24 10:57 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-780000 ssh                                                    | functional-780000    | jenkins | v1.32.0 | 14 Mar 24 10:57 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-780000 cache reload                                           | functional-780000    | jenkins | v1.32.0 | 14 Mar 24 10:57 PDT | 14 Mar 24 10:57 PDT |
| ssh     | functional-780000 ssh                                                    | functional-780000    | jenkins | v1.32.0 | 14 Mar 24 10:57 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.32.0 | 14 Mar 24 10:57 PDT | 14 Mar 24 10:57 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.32.0 | 14 Mar 24 10:57 PDT | 14 Mar 24 10:57 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-780000 kubectl --                                             | functional-780000    | jenkins | v1.32.0 | 14 Mar 24 10:57 PDT |                     |
|         | --context functional-780000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-780000                                                     | functional-780000    | jenkins | v1.32.0 | 14 Mar 24 10:57 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/03/14 10:57:42
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.22.1 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0314 10:57:42.362931   11635 out.go:291] Setting OutFile to fd 1 ...
I0314 10:57:42.363047   11635 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0314 10:57:42.363048   11635 out.go:304] Setting ErrFile to fd 2...
I0314 10:57:42.363050   11635 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0314 10:57:42.363171   11635 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
I0314 10:57:42.364128   11635 out.go:298] Setting JSON to false
I0314 10:57:42.380246   11635 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7034,"bootTime":1710432028,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W0314 10:57:42.380308   11635 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0314 10:57:42.385393   11635 out.go:177] * [functional-780000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
I0314 10:57:42.392206   11635 out.go:177]   - MINIKUBE_LOCATION=18384
I0314 10:57:42.392253   11635 notify.go:220] Checking for updates...
I0314 10:57:42.396318   11635 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
I0314 10:57:42.400162   11635 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0314 10:57:42.403206   11635 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0314 10:57:42.406244   11635 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
I0314 10:57:42.409216   11635 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0314 10:57:42.412562   11635 config.go:182] Loaded profile config "functional-780000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0314 10:57:42.412615   11635 driver.go:392] Setting default libvirt URI to qemu:///system
I0314 10:57:42.417223   11635 out.go:177] * Using the qemu2 driver based on existing profile
I0314 10:57:42.424172   11635 start.go:297] selected driver: qemu2
I0314 10:57:42.424176   11635 start.go:901] validating driver "qemu2" against &{Name:functional-780000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-780000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0314 10:57:42.424217   11635 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0314 10:57:42.426490   11635 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0314 10:57:42.426540   11635 cni.go:84] Creating CNI manager for ""
I0314 10:57:42.426547   11635 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0314 10:57:42.426585   11635 start.go:340] cluster config:
{Name:functional-780000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-780000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0314 10:57:42.430935   11635 iso.go:125] acquiring lock: {Name:mkb282eb7d87bd36e7a71d78648e2dadb7f0a89e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0314 10:57:42.438192   11635 out.go:177] * Starting "functional-780000" primary control-plane node in "functional-780000" cluster
I0314 10:57:42.442201   11635 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
I0314 10:57:42.442214   11635 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
I0314 10:57:42.442227   11635 cache.go:56] Caching tarball of preloaded images
I0314 10:57:42.442288   11635 preload.go:173] Found /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0314 10:57:42.442293   11635 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
I0314 10:57:42.442376   11635 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/functional-780000/config.json ...
I0314 10:57:42.442899   11635 start.go:360] acquireMachinesLock for functional-780000: {Name:mk7f342f84a1fcf2acaf2ebdc885e8ee2d848fd8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0314 10:57:42.442931   11635 start.go:364] duration metric: took 27.625µs to acquireMachinesLock for "functional-780000"
I0314 10:57:42.442940   11635 start.go:96] Skipping create...Using existing machine configuration
I0314 10:57:42.442945   11635 fix.go:54] fixHost starting: 
I0314 10:57:42.443071   11635 fix.go:112] recreateIfNeeded on functional-780000: state=Stopped err=<nil>
W0314 10:57:42.443078   11635 fix.go:138] unexpected machine state, will restart: <nil>
I0314 10:57:42.451218   11635 out.go:177] * Restarting existing qemu2 VM for "functional-780000" ...
I0314 10:57:42.454269   11635 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/functional-780000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/functional-780000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/functional-780000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:b0:7a:77:56:0a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/functional-780000/disk.qcow2
I0314 10:57:42.456330   11635 main.go:141] libmachine: STDOUT: 
I0314 10:57:42.456348   11635 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0314 10:57:42.456379   11635 fix.go:56] duration metric: took 13.434958ms for fixHost
I0314 10:57:42.456383   11635 start.go:83] releasing machines lock for "functional-780000", held for 13.449416ms
W0314 10:57:42.456389   11635 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0314 10:57:42.456423   11635 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0314 10:57:42.456428   11635 start.go:728] Will try again in 5 seconds ...
I0314 10:57:47.458541   11635 start.go:360] acquireMachinesLock for functional-780000: {Name:mk7f342f84a1fcf2acaf2ebdc885e8ee2d848fd8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0314 10:57:47.458835   11635 start.go:364] duration metric: took 245.625µs to acquireMachinesLock for "functional-780000"
I0314 10:57:47.458958   11635 start.go:96] Skipping create...Using existing machine configuration
I0314 10:57:47.458972   11635 fix.go:54] fixHost starting: 
I0314 10:57:47.459760   11635 fix.go:112] recreateIfNeeded on functional-780000: state=Stopped err=<nil>
W0314 10:57:47.459779   11635 fix.go:138] unexpected machine state, will restart: <nil>
I0314 10:57:47.469210   11635 out.go:177] * Restarting existing qemu2 VM for "functional-780000" ...
I0314 10:57:47.473359   11635 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/functional-780000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/functional-780000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/functional-780000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:b0:7a:77:56:0a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/functional-780000/disk.qcow2
I0314 10:57:47.482755   11635 main.go:141] libmachine: STDOUT: 
I0314 10:57:47.482815   11635 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0314 10:57:47.482883   11635 fix.go:56] duration metric: took 23.913375ms for fixHost
I0314 10:57:47.482894   11635 start.go:83] releasing machines lock for "functional-780000", held for 24.04825ms
W0314 10:57:47.483069   11635 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-780000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0314 10:57:47.489118   11635 out.go:177] 
W0314 10:57:47.493179   11635 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0314 10:57:47.493205   11635 out.go:239] * 
W0314 10:57:47.495977   11635 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0314 10:57:47.503182   11635 out.go:177] 

* The control-plane node functional-780000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-780000"
***
--- FAIL: TestFunctional/serial/LogsCmd (0.08s)
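
Editor's note: both restart attempts above fail at the same step. libmachine launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, and the connection to the /var/run/socket_vmnet socket is refused, so no VM ever boots and every later assertion sees a stopped host. A minimal diagnostic sketch for the build agent follows, assuming socket_vmnet was installed via Homebrew as in minikube's qemu2 driver setup (the Homebrew service name is an assumption taken from that setup, not from this report):

  # Is the socket_vmnet daemon alive, and does its socket exist?
  pgrep -fl socket_vmnet
  ls -l /var/run/socket_vmnet
  # If either check fails, restart the daemon; it must run as root
  # so that it can own the vmnet socket (assumes a Homebrew-managed service)
  sudo brew services restart socket_vmnet

Once the socket accepts connections again, rerunning "minikube start -p functional-780000" should get past the GUEST_PROVISION error seen here.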

TestFunctional/serial/LogsFileCmd (0.08s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-arm64 -p functional-780000 logs --file /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd2262025117/001/logs.txt
functional_test.go:1224: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-659000 | jenkins | v1.32.0 | 14 Mar 24 10:55 PDT |                     |
|         | -p download-only-659000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.32.0 | 14 Mar 24 10:55 PDT | 14 Mar 24 10:55 PDT |
| delete  | -p download-only-659000                                                  | download-only-659000 | jenkins | v1.32.0 | 14 Mar 24 10:55 PDT | 14 Mar 24 10:55 PDT |
| start   | -o=json --download-only                                                  | download-only-905000 | jenkins | v1.32.0 | 14 Mar 24 10:55 PDT |                     |
|         | -p download-only-905000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.28.4                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.32.0 | 14 Mar 24 10:56 PDT | 14 Mar 24 10:56 PDT |
| delete  | -p download-only-905000                                                  | download-only-905000 | jenkins | v1.32.0 | 14 Mar 24 10:56 PDT | 14 Mar 24 10:56 PDT |
| start   | -o=json --download-only                                                  | download-only-045000 | jenkins | v1.32.0 | 14 Mar 24 10:56 PDT |                     |
|         | -p download-only-045000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.29.0-rc.2                                        |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.32.0 | 14 Mar 24 10:56 PDT | 14 Mar 24 10:56 PDT |
| delete  | -p download-only-045000                                                  | download-only-045000 | jenkins | v1.32.0 | 14 Mar 24 10:56 PDT | 14 Mar 24 10:56 PDT |
| delete  | -p download-only-659000                                                  | download-only-659000 | jenkins | v1.32.0 | 14 Mar 24 10:56 PDT | 14 Mar 24 10:56 PDT |
| delete  | -p download-only-905000                                                  | download-only-905000 | jenkins | v1.32.0 | 14 Mar 24 10:56 PDT | 14 Mar 24 10:56 PDT |
| delete  | -p download-only-045000                                                  | download-only-045000 | jenkins | v1.32.0 | 14 Mar 24 10:56 PDT | 14 Mar 24 10:56 PDT |
| start   | --download-only -p                                                       | binary-mirror-003000 | jenkins | v1.32.0 | 14 Mar 24 10:56 PDT |                     |
|         | binary-mirror-003000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:51894                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-003000                                                  | binary-mirror-003000 | jenkins | v1.32.0 | 14 Mar 24 10:56 PDT | 14 Mar 24 10:56 PDT |
| addons  | enable dashboard -p                                                      | addons-532000        | jenkins | v1.32.0 | 14 Mar 24 10:56 PDT |                     |
|         | addons-532000                                                            |                      |         |         |                     |                     |
| addons  | disable dashboard -p                                                     | addons-532000        | jenkins | v1.32.0 | 14 Mar 24 10:56 PDT |                     |
|         | addons-532000                                                            |                      |         |         |                     |                     |
| start   | -p addons-532000 --wait=true                                             | addons-532000        | jenkins | v1.32.0 | 14 Mar 24 10:56 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --driver=qemu2                                             |                      |         |         |                     |                     |
|         |  --addons=ingress                                                        |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
| delete  | -p addons-532000                                                         | addons-532000        | jenkins | v1.32.0 | 14 Mar 24 10:56 PDT | 14 Mar 24 10:56 PDT |
| start   | -p nospam-967000 -n=1 --memory=2250 --wait=false                         | nospam-967000        | jenkins | v1.32.0 | 14 Mar 24 10:56 PDT |                     |
|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-967000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-967000 --log_dir                                                  | nospam-967000        | jenkins | v1.32.0 | 14 Mar 24 10:57 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-967000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-967000 --log_dir                                                  | nospam-967000        | jenkins | v1.32.0 | 14 Mar 24 10:57 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-967000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-967000 --log_dir                                                  | nospam-967000        | jenkins | v1.32.0 | 14 Mar 24 10:57 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-967000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-967000 --log_dir                                                  | nospam-967000        | jenkins | v1.32.0 | 14 Mar 24 10:57 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-967000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-967000 --log_dir                                                  | nospam-967000        | jenkins | v1.32.0 | 14 Mar 24 10:57 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-967000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-967000 --log_dir                                                  | nospam-967000        | jenkins | v1.32.0 | 14 Mar 24 10:57 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-967000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-967000 --log_dir                                                  | nospam-967000        | jenkins | v1.32.0 | 14 Mar 24 10:57 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-967000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-967000 --log_dir                                                  | nospam-967000        | jenkins | v1.32.0 | 14 Mar 24 10:57 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-967000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-967000 --log_dir                                                  | nospam-967000        | jenkins | v1.32.0 | 14 Mar 24 10:57 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-967000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-967000 --log_dir                                                  | nospam-967000        | jenkins | v1.32.0 | 14 Mar 24 10:57 PDT | 14 Mar 24 10:57 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-967000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-967000 --log_dir                                                  | nospam-967000        | jenkins | v1.32.0 | 14 Mar 24 10:57 PDT | 14 Mar 24 10:57 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-967000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-967000 --log_dir                                                  | nospam-967000        | jenkins | v1.32.0 | 14 Mar 24 10:57 PDT | 14 Mar 24 10:57 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-967000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-967000                                                         | nospam-967000        | jenkins | v1.32.0 | 14 Mar 24 10:57 PDT | 14 Mar 24 10:57 PDT |
| start   | -p functional-780000                                                     | functional-780000    | jenkins | v1.32.0 | 14 Mar 24 10:57 PDT |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-780000                                                     | functional-780000    | jenkins | v1.32.0 | 14 Mar 24 10:57 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-780000 cache add                                              | functional-780000    | jenkins | v1.32.0 | 14 Mar 24 10:57 PDT | 14 Mar 24 10:57 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-780000 cache add                                              | functional-780000    | jenkins | v1.32.0 | 14 Mar 24 10:57 PDT | 14 Mar 24 10:57 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-780000 cache add                                              | functional-780000    | jenkins | v1.32.0 | 14 Mar 24 10:57 PDT | 14 Mar 24 10:57 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-780000 cache add                                              | functional-780000    | jenkins | v1.32.0 | 14 Mar 24 10:57 PDT | 14 Mar 24 10:57 PDT |
|         | minikube-local-cache-test:functional-780000                              |                      |         |         |                     |                     |
| cache   | functional-780000 cache delete                                           | functional-780000    | jenkins | v1.32.0 | 14 Mar 24 10:57 PDT | 14 Mar 24 10:57 PDT |
|         | minikube-local-cache-test:functional-780000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.32.0 | 14 Mar 24 10:57 PDT | 14 Mar 24 10:57 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.32.0 | 14 Mar 24 10:57 PDT | 14 Mar 24 10:57 PDT |
| ssh     | functional-780000 ssh sudo                                               | functional-780000    | jenkins | v1.32.0 | 14 Mar 24 10:57 PDT |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-780000                                                        | functional-780000    | jenkins | v1.32.0 | 14 Mar 24 10:57 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-780000 ssh                                                    | functional-780000    | jenkins | v1.32.0 | 14 Mar 24 10:57 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-780000 cache reload                                           | functional-780000    | jenkins | v1.32.0 | 14 Mar 24 10:57 PDT | 14 Mar 24 10:57 PDT |
| ssh     | functional-780000 ssh                                                    | functional-780000    | jenkins | v1.32.0 | 14 Mar 24 10:57 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.32.0 | 14 Mar 24 10:57 PDT | 14 Mar 24 10:57 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.32.0 | 14 Mar 24 10:57 PDT | 14 Mar 24 10:57 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-780000 kubectl --                                             | functional-780000    | jenkins | v1.32.0 | 14 Mar 24 10:57 PDT |                     |
|         | --context functional-780000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-780000                                                     | functional-780000    | jenkins | v1.32.0 | 14 Mar 24 10:57 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/03/14 10:57:42
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.22.1 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0314 10:57:42.362931   11635 out.go:291] Setting OutFile to fd 1 ...
I0314 10:57:42.363047   11635 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0314 10:57:42.363048   11635 out.go:304] Setting ErrFile to fd 2...
I0314 10:57:42.363050   11635 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0314 10:57:42.363171   11635 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
I0314 10:57:42.364128   11635 out.go:298] Setting JSON to false
I0314 10:57:42.380246   11635 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7034,"bootTime":1710432028,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W0314 10:57:42.380308   11635 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0314 10:57:42.385393   11635 out.go:177] * [functional-780000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
I0314 10:57:42.392206   11635 out.go:177]   - MINIKUBE_LOCATION=18384
I0314 10:57:42.392253   11635 notify.go:220] Checking for updates...
I0314 10:57:42.396318   11635 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
I0314 10:57:42.400162   11635 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0314 10:57:42.403206   11635 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0314 10:57:42.406244   11635 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
I0314 10:57:42.409216   11635 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0314 10:57:42.412562   11635 config.go:182] Loaded profile config "functional-780000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0314 10:57:42.412615   11635 driver.go:392] Setting default libvirt URI to qemu:///system
I0314 10:57:42.417223   11635 out.go:177] * Using the qemu2 driver based on existing profile
I0314 10:57:42.424172   11635 start.go:297] selected driver: qemu2
I0314 10:57:42.424176   11635 start.go:901] validating driver "qemu2" against &{Name:functional-780000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-780000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0314 10:57:42.424217   11635 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0314 10:57:42.426490   11635 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0314 10:57:42.426540   11635 cni.go:84] Creating CNI manager for ""
I0314 10:57:42.426547   11635 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0314 10:57:42.426585   11635 start.go:340] cluster config:
{Name:functional-780000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-780000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0314 10:57:42.430935   11635 iso.go:125] acquiring lock: {Name:mkb282eb7d87bd36e7a71d78648e2dadb7f0a89e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0314 10:57:42.438192   11635 out.go:177] * Starting "functional-780000" primary control-plane node in "functional-780000" cluster
I0314 10:57:42.442201   11635 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
I0314 10:57:42.442214   11635 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
I0314 10:57:42.442227   11635 cache.go:56] Caching tarball of preloaded images
I0314 10:57:42.442288   11635 preload.go:173] Found /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0314 10:57:42.442293   11635 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
I0314 10:57:42.442376   11635 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/functional-780000/config.json ...
I0314 10:57:42.442899   11635 start.go:360] acquireMachinesLock for functional-780000: {Name:mk7f342f84a1fcf2acaf2ebdc885e8ee2d848fd8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0314 10:57:42.442931   11635 start.go:364] duration metric: took 27.625µs to acquireMachinesLock for "functional-780000"
I0314 10:57:42.442940   11635 start.go:96] Skipping create...Using existing machine configuration
I0314 10:57:42.442945   11635 fix.go:54] fixHost starting: 
I0314 10:57:42.443071   11635 fix.go:112] recreateIfNeeded on functional-780000: state=Stopped err=<nil>
W0314 10:57:42.443078   11635 fix.go:138] unexpected machine state, will restart: <nil>
I0314 10:57:42.451218   11635 out.go:177] * Restarting existing qemu2 VM for "functional-780000" ...
I0314 10:57:42.454269   11635 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/functional-780000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/functional-780000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/functional-780000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:b0:7a:77:56:0a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/functional-780000/disk.qcow2
I0314 10:57:42.456330   11635 main.go:141] libmachine: STDOUT: 
I0314 10:57:42.456348   11635 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0314 10:57:42.456379   11635 fix.go:56] duration metric: took 13.434958ms for fixHost
I0314 10:57:42.456383   11635 start.go:83] releasing machines lock for "functional-780000", held for 13.449416ms
W0314 10:57:42.456389   11635 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0314 10:57:42.456423   11635 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0314 10:57:42.456428   11635 start.go:728] Will try again in 5 seconds ...
I0314 10:57:47.458541   11635 start.go:360] acquireMachinesLock for functional-780000: {Name:mk7f342f84a1fcf2acaf2ebdc885e8ee2d848fd8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0314 10:57:47.458835   11635 start.go:364] duration metric: took 245.625µs to acquireMachinesLock for "functional-780000"
I0314 10:57:47.458958   11635 start.go:96] Skipping create...Using existing machine configuration
I0314 10:57:47.458972   11635 fix.go:54] fixHost starting: 
I0314 10:57:47.459760   11635 fix.go:112] recreateIfNeeded on functional-780000: state=Stopped err=<nil>
W0314 10:57:47.459779   11635 fix.go:138] unexpected machine state, will restart: <nil>
I0314 10:57:47.469210   11635 out.go:177] * Restarting existing qemu2 VM for "functional-780000" ...
I0314 10:57:47.473359   11635 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/functional-780000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/functional-780000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/functional-780000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:b0:7a:77:56:0a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/functional-780000/disk.qcow2
I0314 10:57:47.482755   11635 main.go:141] libmachine: STDOUT: 
I0314 10:57:47.482815   11635 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0314 10:57:47.482883   11635 fix.go:56] duration metric: took 23.913375ms for fixHost
I0314 10:57:47.482894   11635 start.go:83] releasing machines lock for "functional-780000", held for 24.04825ms
W0314 10:57:47.483069   11635 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-780000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0314 10:57:47.489118   11635 out.go:177] 
W0314 10:57:47.493179   11635 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0314 10:57:47.493205   11635 out.go:239] * 
W0314 10:57:47.495977   11635 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0314 10:57:47.503182   11635 out.go:177] 

--- FAIL: TestFunctional/serial/LogsFileCmd (0.08s)
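Both restart attempts in the "Last Start" log above fail at the same step: socket_vmnet_client cannot open /var/run/socket_vmnet, so the qemu2 driver never gets its network socket and minikube exits with GUEST_PROVISION. A minimal Go sketch of the same reachability check (a hypothetical standalone probe, not part of the minikube test suite; the socket path is taken from the log above):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The qemu2 driver reaches the socket_vmnet daemon through this unix
	// socket; "Connection refused" in the log means nothing is listening.
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Printf("socket_vmnet not reachable at %s: %v\n", sock, err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is listening")
}

If the probe fails the same way, the daemon (typically installed under /opt/socket_vmnet) is down on the build agent, which would account for every qemu2 start failure in this report.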

TestFunctional/serial/InvalidService (0.03s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-780000 apply -f testdata/invalidsvc.yaml
functional_test.go:2317: (dbg) Non-zero exit: kubectl --context functional-780000 apply -f testdata/invalidsvc.yaml: exit status 1 (28.442375ms)

** stderr ** 
	error: context "functional-780000" does not exist

** /stderr **
functional_test.go:2319: kubectl --context functional-780000 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/serial/InvalidService (0.03s)
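Because the cluster never came up, no functional-780000 entry was ever written to the kubeconfig, so every kubectl-driven subtest fails with this same configuration error rather than anything service-specific. A short sketch of the corresponding pre-flight check (a hypothetical helper, not part of the suite; assumes kubectl is on PATH and that `kubectl config get-contexts -o name` prints one context name per line):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// List only the context names known to the current kubeconfig.
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	for _, ctx := range strings.Fields(string(out)) {
		if ctx == "functional-780000" {
			fmt.Println("context exists")
			return
		}
	}
	fmt.Println(`context "functional-780000" does not exist`) // the error seen above
}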

TestFunctional/parallel/DashboardCmd (0.2s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-780000 --alsologtostderr -v=1]
functional_test.go:914: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-780000 --alsologtostderr -v=1] ...
functional_test.go:906: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-780000 --alsologtostderr -v=1] stdout:
functional_test.go:906: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-780000 --alsologtostderr -v=1] stderr:
I0314 10:58:41.559092   11967 out.go:291] Setting OutFile to fd 1 ...
I0314 10:58:41.559501   11967 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0314 10:58:41.559505   11967 out.go:304] Setting ErrFile to fd 2...
I0314 10:58:41.559507   11967 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0314 10:58:41.559665   11967 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
I0314 10:58:41.559958   11967 mustload.go:65] Loading cluster: functional-780000
I0314 10:58:41.560143   11967 config.go:182] Loaded profile config "functional-780000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0314 10:58:41.563548   11967 out.go:177] * The control-plane node functional-780000 host is not running: state=Stopped
I0314 10:58:41.567336   11967 out.go:177]   To start a cluster, run: "minikube start -p functional-780000"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-780000 -n functional-780000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-780000 -n functional-780000: exit status 7 (43.116417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-780000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/DashboardCmd (0.20s)
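The dashboard test treats its subprocess as failed when no URL ever appears on stdout; with the host stopped, minikube prints advice text instead, hence "output didn't produce a URL". A sketch of that detection logic (hypothetical and simplified, not the actual functional_test.go code):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Stdout captured from `minikube dashboard --url` in the failing run:
	out := "* The control-plane node functional-780000 host is not running: state=Stopped\n"
	urlRe := regexp.MustCompile(`https?://\S+`)
	if u := urlRe.FindString(out); u != "" {
		fmt.Println("dashboard URL:", u)
	} else {
		fmt.Println("output didn't produce a URL") // the failure reported above
	}
}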

TestFunctional/parallel/StatusCmd (0.13s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-arm64 -p functional-780000 status
functional_test.go:850: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-780000 status: exit status 7 (32.696375ms)

-- stdout --
	functional-780000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
functional_test.go:852: failed to run minikube status. args "out/minikube-darwin-arm64 -p functional-780000 status" : exit status 7
functional_test.go:856: (dbg) Run:  out/minikube-darwin-arm64 -p functional-780000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:856: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-780000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 7 (33.130583ms)

-- stdout --
	host:Stopped,kublet:Stopped,apiserver:Stopped,kubeconfig:Stopped

-- /stdout --
functional_test.go:858: failed to run minikube status with custom format: args "out/minikube-darwin-arm64 -p functional-780000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 7
functional_test.go:868: (dbg) Run:  out/minikube-darwin-arm64 -p functional-780000 status -o json
functional_test.go:868: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-780000 status -o json: exit status 7 (31.224584ms)

-- stdout --
	{"Name":"functional-780000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
functional_test.go:870: failed to run minikube status with json output. args "out/minikube-darwin-arm64 -p functional-780000 status -o json" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-780000 -n functional-780000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-780000 -n functional-780000: exit status 7 (32.435833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-780000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/StatusCmd (0.13s)
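All three status invocations above return exit status 7 rather than 0: minikube encodes host state in the exit code, and the harness flags any non-zero exit. A minimal sketch of reading that code from Go (hypothetical; it assumes the minikube binary path from the log and is run from the test workspace, and exit code 7 is simply what the stopped-host runs above report):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "-p", "functional-780000", "status")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out)) // "host: Stopped" etc. when the VM is down
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		fmt.Println("status exit code:", exitErr.ExitCode()) // 7 in the runs above
	}
}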

TestFunctional/parallel/ServiceCmdConnect (0.14s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-780000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1623: (dbg) Non-zero exit: kubectl --context functional-780000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.774042ms)

** stderr ** 
	error: context "functional-780000" does not exist

** /stderr **
functional_test.go:1629: failed to create hello-node deployment with this command "kubectl --context functional-780000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
functional_test.go:1594: service test failed - dumping debug information
functional_test.go:1595: -----------------------service failure post-mortem--------------------------------
functional_test.go:1598: (dbg) Run:  kubectl --context functional-780000 describe po hello-node-connect
functional_test.go:1598: (dbg) Non-zero exit: kubectl --context functional-780000 describe po hello-node-connect: exit status 1 (25.985625ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-780000

** /stderr **
functional_test.go:1600: "kubectl --context functional-780000 describe po hello-node-connect" failed: exit status 1
functional_test.go:1602: hello-node pod describe:
functional_test.go:1604: (dbg) Run:  kubectl --context functional-780000 logs -l app=hello-node-connect
functional_test.go:1604: (dbg) Non-zero exit: kubectl --context functional-780000 logs -l app=hello-node-connect: exit status 1 (27.037167ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-780000

** /stderr **
functional_test.go:1606: "kubectl --context functional-780000 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1608: hello-node logs:
functional_test.go:1610: (dbg) Run:  kubectl --context functional-780000 describe svc hello-node-connect
functional_test.go:1610: (dbg) Non-zero exit: kubectl --context functional-780000 describe svc hello-node-connect: exit status 1 (26.77275ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-780000

** /stderr **
functional_test.go:1612: "kubectl --context functional-780000 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1614: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-780000 -n functional-780000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-780000 -n functional-780000: exit status 7 (32.252333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-780000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (0.03s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: failed waiting for storage-provisioner: client config: context "functional-780000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-780000 -n functional-780000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-780000 -n functional-780000: exit status 7 (32.73ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-780000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (0.03s)

TestFunctional/parallel/SSHCmd (0.12s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-arm64 -p functional-780000 ssh "echo hello"
functional_test.go:1721: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-780000 ssh "echo hello": exit status 83 (42.186458ms)

-- stdout --
	* The control-plane node functional-780000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-780000"

-- /stdout --
functional_test.go:1726: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-780000 ssh \"echo hello\"" : exit status 83
functional_test.go:1730: expected minikube ssh command output to be -"hello"- but got *"* The control-plane node functional-780000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-780000\"\n"*. args "out/minikube-darwin-arm64 -p functional-780000 ssh \"echo hello\""
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-arm64 -p functional-780000 ssh "cat /etc/hostname"
functional_test.go:1738: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-780000 ssh "cat /etc/hostname": exit status 83 (48.02875ms)

-- stdout --
	* The control-plane node functional-780000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-780000"

-- /stdout --
functional_test.go:1744: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-780000 ssh \"cat /etc/hostname\"" : exit status 83
functional_test.go:1748: expected minikube ssh command output to be -"functional-780000"- but got *"* The control-plane node functional-780000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-780000\"\n"*. args "out/minikube-darwin-arm64 -p functional-780000 ssh \"cat /etc/hostname\""
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-780000 -n functional-780000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-780000 -n functional-780000: exit status 7 (32.494916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-780000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/SSHCmd (0.12s)

TestFunctional/parallel/CpCmd (0.28s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-780000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-780000 cp testdata/cp-test.txt /home/docker/cp-test.txt: exit status 83 (57.628958ms)

-- stdout --
	* The control-plane node functional-780000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-780000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-780000 cp testdata/cp-test.txt /home/docker/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-780000 ssh -n functional-780000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-780000 ssh -n functional-780000 "sudo cat /home/docker/cp-test.txt": exit status 83 (44.954416ms)

-- stdout --
	* The control-plane node functional-780000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-780000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-780000 ssh -n functional-780000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-780000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-780000\"\n",
  }, "")
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-780000 cp functional-780000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd3538691241/001/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-780000 cp functional-780000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd3538691241/001/cp-test.txt: exit status 83 (43.328875ms)

-- stdout --
	* The control-plane node functional-780000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-780000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-780000 cp functional-780000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd3538691241/001/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-780000 ssh -n functional-780000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-780000 ssh -n functional-780000 "sudo cat /home/docker/cp-test.txt": exit status 83 (41.998542ms)

-- stdout --
	* The control-plane node functional-780000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-780000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-780000 ssh -n functional-780000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:528: failed to read test file 'testdata/cp-test.txt' : open /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd3538691241/001/cp-test.txt: no such file or directory
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  string(
- 	"* The control-plane node functional-780000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-780000\"\n",
+ 	"",
  )
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-780000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-780000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt: exit status 83 (46.801584ms)

-- stdout --
	* The control-plane node functional-780000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-780000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-780000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-780000 ssh -n functional-780000 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-780000 ssh -n functional-780000 "sudo cat /tmp/does/not/exist/cp-test.txt": exit status 83 (47.865375ms)

-- stdout --
	* The control-plane node functional-780000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-780000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-780000 ssh -n functional-780000 \"sudo cat /tmp/does/not/exist/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-780000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-780000\"\n",
  }, "")
--- FAIL: TestFunctional/parallel/CpCmd (0.28s)
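The "content mismatch (-want +got)" blocks throughout CpCmd, FileSync, and CertSync are go-cmp diffs: "-" lines carry the expected file content and "+" lines carry the advice text that came back instead. A minimal reproduction of that diff format (assumes the github.com/google/go-cmp module; the strings are taken from the failure above):

package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	want := "Test file for checking file cp process"
	got := "* The control-plane node functional-780000 host is not running: state=Stopped\n" +
		"  To start a cluster, run: \"minikube start -p functional-780000\"\n"
	// cmp.Diff renders string differences in the strings.Join(...) style
	// shown in this report.
	if diff := cmp.Diff(want, got); diff != "" {
		fmt.Printf("content mismatch (-want +got):\n%s", diff)
	}
}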

TestFunctional/parallel/FileSync (0.08s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/11238/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-arm64 -p functional-780000 ssh "sudo cat /etc/test/nested/copy/11238/hosts"
functional_test.go:1927: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-780000 ssh "sudo cat /etc/test/nested/copy/11238/hosts": exit status 83 (41.697917ms)

-- stdout --
	* The control-plane node functional-780000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-780000"

-- /stdout --
functional_test.go:1929: out/minikube-darwin-arm64 -p functional-780000 ssh "sudo cat /etc/test/nested/copy/11238/hosts" failed: exit status 83
functional_test.go:1932: file sync test content: * The control-plane node functional-780000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-780000"
functional_test.go:1942: /etc/sync.test content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file sync process",
+ 	"he control-plane node functional-780000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-780000\"\n",
  }, "")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-780000 -n functional-780000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-780000 -n functional-780000: exit status 7 (33.967583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-780000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/FileSync (0.08s)

TestFunctional/parallel/CertSync (0.3s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/11238.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-780000 ssh "sudo cat /etc/ssl/certs/11238.pem"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-780000 ssh "sudo cat /etc/ssl/certs/11238.pem": exit status 83 (47.940791ms)

-- stdout --
	* The control-plane node functional-780000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-780000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/etc/ssl/certs/11238.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-780000 ssh \"sudo cat /etc/ssl/certs/11238.pem\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/11238.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-780000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-780000"
  	"""
  )
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/11238.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-780000 ssh "sudo cat /usr/share/ca-certificates/11238.pem"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-780000 ssh "sudo cat /usr/share/ca-certificates/11238.pem": exit status 83 (42.689833ms)

-- stdout --
	* The control-plane node functional-780000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-780000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/usr/share/ca-certificates/11238.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-780000 ssh \"sudo cat /usr/share/ca-certificates/11238.pem\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /usr/share/ca-certificates/11238.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-780000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-780000"
  	"""
  )
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-780000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-780000 ssh "sudo cat /etc/ssl/certs/51391683.0": exit status 83 (49.719792ms)

-- stdout --
	* The control-plane node functional-780000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-780000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/etc/ssl/certs/51391683.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-780000 ssh \"sudo cat /etc/ssl/certs/51391683.0\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/51391683.0 mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-780000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-780000"
  	"""
  )
functional_test.go:1995: Checking for existence of /etc/ssl/certs/112382.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-780000 ssh "sudo cat /etc/ssl/certs/112382.pem"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-780000 ssh "sudo cat /etc/ssl/certs/112382.pem": exit status 83 (42.698667ms)

-- stdout --
	* The control-plane node functional-780000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-780000"

-- /stdout --
functional_test.go:1998: failed to check existence of "/etc/ssl/certs/112382.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-780000 ssh \"sudo cat /etc/ssl/certs/112382.pem\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/112382.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-780000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-780000"
  	"""
  )
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/112382.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-780000 ssh "sudo cat /usr/share/ca-certificates/112382.pem"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-780000 ssh "sudo cat /usr/share/ca-certificates/112382.pem": exit status 83 (42.631084ms)

-- stdout --
	* The control-plane node functional-780000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-780000"

-- /stdout --
functional_test.go:1998: failed to check existence of "/usr/share/ca-certificates/112382.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-780000 ssh \"sudo cat /usr/share/ca-certificates/112382.pem\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /usr/share/ca-certificates/112382.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-780000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-780000"
  	"""
  )
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-780000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-780000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": exit status 83 (43.864166ms)

-- stdout --
	* The control-plane node functional-780000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-780000"

-- /stdout --
functional_test.go:1998: failed to check existence of "/etc/ssl/certs/3ec20f2e.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-780000 ssh \"sudo cat /etc/ssl/certs/3ec20f2e.0\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/3ec20f2e.0 mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-780000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-780000"
  	"""
  )
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-780000 -n functional-780000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-780000 -n functional-780000: exit status 7 (32.792834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-780000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/CertSync (0.30s)
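
Every ssh-based assertion in CertSync fails the same way: the qemu2 VM never started, so any minikube subcommand that needs a running host prints the state=Stopped guard and exits 83, while the post-mortem status probe prints "Stopped" and exits 7. A minimal Go sketch of that probe (the binary path and profile name are taken from the log above; treating any state other than "Running" as a reason to skip is an assumption, not minikube's actual helper logic):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same probe as the post-mortem at helpers_test.go:239; a stopped
		// host prints "Stopped" and exits with status 7.
		out, _ := exec.Command("out/minikube-darwin-arm64", "status",
			"--format={{.Host}}", "-p", "functional-780000").Output()
		fmt.Println(strings.TrimSpace(string(out)) == "Running")
	}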

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-780000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:218: (dbg) Non-zero exit: kubectl --context functional-780000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (25.948458ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-780000

** /stderr **
functional_test.go:220: failed to 'kubectl get nodes' with args "kubectl --context functional-780000 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:226: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-780000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-780000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-780000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-780000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/primary" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-780000

** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-780000 -n functional-780000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-780000 -n functional-780000: exit status 7 (32.333583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-780000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/NodeLabels (0.06s)
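
NodeLabels fails one step earlier than the ssh-based tests: kubectl has no "functional-780000" context at all, so the go-template never executes. When a context exists, the template simply prints every label key on the first node, which is what the five minikube.k8s.io/* assertions above scan for. A self-contained sketch of the same template against stand-in data (only the label keys come from the assertions; the values are placeholders):

	package main

	import (
		"os"
		"text/template"
	)

	func main() {
		// Stand-in for one item of `kubectl get nodes -o json`.
		data := map[string]any{
			"items": []map[string]any{{
				"metadata": map[string]any{
					"labels": map[string]string{
						"minikube.k8s.io/commit":     "<commit>",
						"minikube.k8s.io/name":       "functional-780000",
						"minikube.k8s.io/primary":    "true",
						"minikube.k8s.io/updated_at": "<timestamp>",
						"minikube.k8s.io/version":    "<version>",
					},
				},
			}},
		}
		tmpl := template.Must(template.New("labels").Parse(
			`{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}`))
		tmpl.Execute(os.Stdout, data) // prints the five label keys
	}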

TestFunctional/parallel/NonActiveRuntimeDisabled (0.04s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-arm64 -p functional-780000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-780000 ssh "sudo systemctl is-active crio": exit status 83 (38.380459ms)

-- stdout --
	* The control-plane node functional-780000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-780000"

-- /stdout --
functional_test.go:2026: output of 
-- stdout --
	* The control-plane node functional-780000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-780000"

-- /stdout --: exit status 83
functional_test.go:2029: For runtime "docker": expected "crio" to be inactive but got "* The control-plane node functional-780000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-780000\"\n" 
--- FAIL: TestFunctional/parallel/NonActiveRuntimeDisabled (0.04s)

TestFunctional/parallel/Version/components (0.04s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-arm64 -p functional-780000 version -o=json --components
functional_test.go:2266: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-780000 version -o=json --components: exit status 83 (43.903375ms)

-- stdout --
	* The control-plane node functional-780000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-780000"

-- /stdout --
functional_test.go:2268: error version: exit status 83
functional_test.go:2273: expected to see "buildctl" in the minikube version --components but got:
* The control-plane node functional-780000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-780000"
functional_test.go:2273: expected to see "commit" in the minikube version --components but got:
* The control-plane node functional-780000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-780000"
functional_test.go:2273: expected to see "containerd" in the minikube version --components but got:
* The control-plane node functional-780000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-780000"
functional_test.go:2273: expected to see "crictl" in the minikube version --components but got:
* The control-plane node functional-780000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-780000"
functional_test.go:2273: expected to see "crio" in the minikube version --components but got:
* The control-plane node functional-780000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-780000"
functional_test.go:2273: expected to see "ctr" in the minikube version --components but got:
* The control-plane node functional-780000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-780000"
functional_test.go:2273: expected to see "docker" in the minikube version --components but got:
* The control-plane node functional-780000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-780000"
functional_test.go:2273: expected to see "minikubeVersion" in the minikube version --components but got:
* The control-plane node functional-780000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-780000"
functional_test.go:2273: expected to see "podman" in the minikube version --components but got:
* The control-plane node functional-780000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-780000"
functional_test.go:2273: expected to see "crun" in the minikube version --components but got:
* The control-plane node functional-780000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-780000"
--- FAIL: TestFunctional/parallel/Version/components (0.04s)
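
The check at functional_test.go:2273 is a plain substring scan of the version output for ten component names, and the guard message contains none of them, so all ten assertions fire. A sketch of the same scan (binary path, profile, and the component list are all taken from the failures above):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		want := []string{"buildctl", "commit", "containerd", "crictl", "crio",
			"ctr", "docker", "minikubeVersion", "podman", "crun"}
		out, _ := exec.Command("out/minikube-darwin-arm64", "-p", "functional-780000",
			"version", "-o=json", "--components").CombinedOutput()
		for _, w := range want {
			if !strings.Contains(string(out), w) {
				fmt.Printf("missing component %q\n", w)
			}
		}
	}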

TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-780000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-780000 image ls --format short --alsologtostderr:

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-780000 image ls --format short --alsologtostderr:
I0314 10:58:41.977848   11982 out.go:291] Setting OutFile to fd 1 ...
I0314 10:58:41.977986   11982 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0314 10:58:41.977989   11982 out.go:304] Setting ErrFile to fd 2...
I0314 10:58:41.977992   11982 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0314 10:58:41.978113   11982 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
I0314 10:58:41.978545   11982 config.go:182] Loaded profile config "functional-780000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0314 10:58:41.978602   11982 config.go:182] Loaded profile config "functional-780000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
functional_test.go:274: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-780000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-780000 image ls --format table --alsologtostderr:
|-------|-----|----------|------|
| Image | Tag | Image ID | Size |
|-------|-----|----------|------|
|-------|-----|----------|------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-780000 image ls --format table --alsologtostderr:
I0314 10:58:42.210893   11994 out.go:291] Setting OutFile to fd 1 ...
I0314 10:58:42.211046   11994 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0314 10:58:42.211050   11994 out.go:304] Setting ErrFile to fd 2...
I0314 10:58:42.211052   11994 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0314 10:58:42.211185   11994 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
I0314 10:58:42.211674   11994 config.go:182] Loaded profile config "functional-780000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0314 10:58:42.211730   11994 config.go:182] Loaded profile config "functional-780000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
functional_test.go:274: expected | registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-780000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-780000 image ls --format json --alsologtostderr:
[]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-780000 image ls --format json --alsologtostderr:
I0314 10:58:42.173105   11992 out.go:291] Setting OutFile to fd 1 ...
I0314 10:58:42.173246   11992 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0314 10:58:42.173249   11992 out.go:304] Setting ErrFile to fd 2...
I0314 10:58:42.173251   11992 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0314 10:58:42.173381   11992 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
I0314 10:58:42.173782   11992 config.go:182] Loaded profile config "functional-780000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0314 10:58:42.173838   11992 config.go:182] Loaded profile config "functional-780000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
functional_test.go:274: expected ["registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-780000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-780000 image ls --format yaml --alsologtostderr:
[]

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-780000 image ls --format yaml --alsologtostderr:
I0314 10:58:42.016629   11984 out.go:291] Setting OutFile to fd 1 ...
I0314 10:58:42.016797   11984 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0314 10:58:42.016800   11984 out.go:304] Setting ErrFile to fd 2...
I0314 10:58:42.016803   11984 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0314 10:58:42.016933   11984 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
I0314 10:58:42.017390   11984 config.go:182] Loaded profile config "functional-780000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0314 10:58:42.017450   11984 config.go:182] Loaded profile config "functional-780000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
functional_test.go:274: expected - registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)
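
All four image-list formats return an empty listing here (blank output, an empty table, [] and []) because no container runtime is reachable. Each subtest then looks for registry.k8s.io/pause rendered in its format's shape; the expected fragments below are collected verbatim from the four functional_test.go:274 failure messages:

	package main

	import "fmt"

	func main() {
		// Expected fragment per `image ls --format <f>` output, as quoted above.
		pause := map[string]string{
			"short": "registry.k8s.io/pause",
			"table": "| registry.k8s.io/pause",
			"json":  `["registry.k8s.io/pause`,
			"yaml":  "- registry.k8s.io/pause",
		}
		for format, want := range pause {
			fmt.Printf("%s: expects %q\n", format, want)
		}
	}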

TestFunctional/parallel/ImageCommands/ImageBuild (0.12s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-arm64 -p functional-780000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-780000 ssh pgrep buildkitd: exit status 83 (42.858541ms)

-- stdout --
	* The control-plane node functional-780000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-780000"

-- /stdout --
functional_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p functional-780000 image build -t localhost/my-image:functional-780000 testdata/build --alsologtostderr
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-780000 image build -t localhost/my-image:functional-780000 testdata/build --alsologtostderr:
I0314 10:58:42.097337   11988 out.go:291] Setting OutFile to fd 1 ...
I0314 10:58:42.097879   11988 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0314 10:58:42.097883   11988 out.go:304] Setting ErrFile to fd 2...
I0314 10:58:42.097885   11988 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0314 10:58:42.098051   11988 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
I0314 10:58:42.098465   11988 config.go:182] Loaded profile config "functional-780000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0314 10:58:42.098900   11988 config.go:182] Loaded profile config "functional-780000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0314 10:58:42.099125   11988 build_images.go:133] succeeded building to: 
I0314 10:58:42.099129   11988 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-780000 image ls
functional_test.go:442: expected "localhost/my-image:functional-780000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (0.12s)

TestFunctional/parallel/DockerEnv/bash (0.05s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-780000 docker-env) && out/minikube-darwin-arm64 status -p functional-780000"
functional_test.go:495: (dbg) Non-zero exit: /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-780000 docker-env) && out/minikube-darwin-arm64 status -p functional-780000": exit status 1 (51.033708ms)
functional_test.go:501: failed to do status after eval-ing docker-env. error: exit status 1
--- FAIL: TestFunctional/parallel/DockerEnv/bash (0.05s)
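
The docker-env test deliberately runs everything in a single bash process so that the eval'd environment is visible to the status call that follows; with the host stopped, docker-env prints the guard instead of shell exports and the compound command exits 1. A sketch of the same invocation via os/exec (command string copied from functional_test.go:495):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// One bash -c invocation keeps the eval'd variables in scope for
		// the second command.
		cmd := exec.Command("/bin/bash", "-c",
			"eval $(out/minikube-darwin-arm64 -p functional-780000 docker-env) && "+
				"out/minikube-darwin-arm64 status -p functional-780000")
		out, err := cmd.CombinedOutput()
		fmt.Println(string(out), err)
	}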

TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-780000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-780000 update-context --alsologtostderr -v=2: exit status 83 (45.673875ms)

-- stdout --
	* The control-plane node functional-780000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-780000"

-- /stdout --
** stderr ** 
	I0314 10:58:41.843525   11976 out.go:291] Setting OutFile to fd 1 ...
	I0314 10:58:41.844377   11976 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 10:58:41.844381   11976 out.go:304] Setting ErrFile to fd 2...
	I0314 10:58:41.844383   11976 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 10:58:41.844545   11976 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
	I0314 10:58:41.844758   11976 mustload.go:65] Loading cluster: functional-780000
	I0314 10:58:41.844945   11976 config.go:182] Loaded profile config "functional-780000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 10:58:41.849578   11976 out.go:177] * The control-plane node functional-780000 host is not running: state=Stopped
	I0314 10:58:41.853579   11976 out.go:177]   To start a cluster, run: "minikube start -p functional-780000"

** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-780000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-780000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-780000\"\n", want=*"No changes"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-780000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-780000 update-context --alsologtostderr -v=2: exit status 83 (43.543208ms)

-- stdout --
	* The control-plane node functional-780000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-780000"

-- /stdout --
** stderr ** 
	I0314 10:58:41.934066   11980 out.go:291] Setting OutFile to fd 1 ...
	I0314 10:58:41.934211   11980 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 10:58:41.934214   11980 out.go:304] Setting ErrFile to fd 2...
	I0314 10:58:41.934217   11980 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 10:58:41.934336   11980 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
	I0314 10:58:41.934534   11980 mustload.go:65] Loading cluster: functional-780000
	I0314 10:58:41.934720   11980 config.go:182] Loaded profile config "functional-780000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 10:58:41.939578   11980 out.go:177] * The control-plane node functional-780000 host is not running: state=Stopped
	I0314 10:58:41.943532   11980 out.go:177]   To start a cluster, run: "minikube start -p functional-780000"

** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-780000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-780000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-780000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.04s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-780000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-780000 update-context --alsologtostderr -v=2: exit status 83 (44.725209ms)

-- stdout --
	* The control-plane node functional-780000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-780000"

-- /stdout --
** stderr ** 
	I0314 10:58:41.889746   11978 out.go:291] Setting OutFile to fd 1 ...
	I0314 10:58:41.889911   11978 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 10:58:41.889914   11978 out.go:304] Setting ErrFile to fd 2...
	I0314 10:58:41.889917   11978 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 10:58:41.890035   11978 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
	I0314 10:58:41.890268   11978 mustload.go:65] Loading cluster: functional-780000
	I0314 10:58:41.890461   11978 config.go:182] Loaded profile config "functional-780000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 10:58:41.894599   11978 out.go:177] * The control-plane node functional-780000 host is not running: state=Stopped
	I0314 10:58:41.898480   11978 out.go:177]   To start a cluster, run: "minikube start -p functional-780000"

** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-780000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-780000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-780000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)
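
All three update-context subtests run the identical command and differ only in the substring they expect, per the want=*...* patterns at functional_test.go:2122; each run printed only the guard, so all three failed identically. The expected patterns, collected for reference:

	package main

	import "fmt"

	func main() {
		// Want-substrings per subtest, taken from the three failures above.
		wants := map[string]string{
			"no_changes":          "No changes",
			"no_minikube_cluster": "context has been updated",
			"no_clusters":         "context has been updated",
		}
		for sub, want := range wants {
			fmt.Printf("%s: want *%s*\n", sub, want)
		}
	}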

TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-780000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1433: (dbg) Non-zero exit: kubectl --context functional-780000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.781333ms)

** stderr ** 
	error: context "functional-780000" does not exist

** /stderr **
functional_test.go:1439: failed to create hello-node deployment with this command "kubectl --context functional-780000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

TestFunctional/parallel/ServiceCmd/List (0.05s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-arm64 -p functional-780000 service list
functional_test.go:1455: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-780000 service list: exit status 83 (45.660208ms)

-- stdout --
	* The control-plane node functional-780000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-780000"

-- /stdout --
functional_test.go:1457: failed to do service list. args "out/minikube-darwin-arm64 -p functional-780000 service list" : exit status 83
functional_test.go:1460: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-780000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-780000\"\n"-
--- FAIL: TestFunctional/parallel/ServiceCmd/List (0.05s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-arm64 -p functional-780000 service list -o json
functional_test.go:1485: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-780000 service list -o json: exit status 83 (44.829792ms)

-- stdout --
	* The control-plane node functional-780000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-780000"

-- /stdout --
functional_test.go:1487: failed to list services with json format. args "out/minikube-darwin-arm64 -p functional-780000 service list -o json": exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/JSONOutput (0.04s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.05s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-arm64 -p functional-780000 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-780000 service --namespace=default --https --url hello-node: exit status 83 (44.946708ms)

-- stdout --
	* The control-plane node functional-780000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-780000"

-- /stdout --
functional_test.go:1507: failed to get service url. args "out/minikube-darwin-arm64 -p functional-780000 service --namespace=default --https --url hello-node" : exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.05s)

TestFunctional/parallel/ServiceCmd/Format (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-arm64 -p functional-780000 service hello-node --url --format={{.IP}}
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-780000 service hello-node --url --format={{.IP}}: exit status 83 (43.778208ms)

-- stdout --
	* The control-plane node functional-780000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-780000"

-- /stdout --
functional_test.go:1538: failed to get service url with custom format. args "out/minikube-darwin-arm64 -p functional-780000 service hello-node --url --format={{.IP}}": exit status 83
functional_test.go:1544: "* The control-plane node functional-780000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-780000\"" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.04s)

TestFunctional/parallel/ServiceCmd/URL (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-arm64 -p functional-780000 service hello-node --url
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-780000 service hello-node --url: exit status 83 (44.76875ms)

-- stdout --
	* The control-plane node functional-780000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-780000"

-- /stdout --
functional_test.go:1557: failed to get service url. args: "out/minikube-darwin-arm64 -p functional-780000 service hello-node --url": exit status 83
functional_test.go:1561: found endpoint for hello-node: * The control-plane node functional-780000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-780000"
functional_test.go:1565: failed to parse "* The control-plane node functional-780000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-780000\"": parse "* The control-plane node functional-780000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-780000\"": net/url: invalid control character in URL
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.04s)
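
The parse failure at functional_test.go:1565 is exactly what net/url reports when handed a multi-line string: the embedded newline is a control character. It reproduces in isolation:

	package main

	import (
		"fmt"
		"net/url"
	)

	func main() {
		guard := "* The control-plane node functional-780000 host is not running: state=Stopped\n" +
			"  To start a cluster, run: \"minikube start -p functional-780000\""
		_, err := url.Parse(guard)
		fmt.Println(err) // ... net/url: invalid control character in URL
	}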

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.08s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-780000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-780000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 83. stderr: I0314 10:57:50.558116   11753 out.go:291] Setting OutFile to fd 1 ...
I0314 10:57:50.558261   11753 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0314 10:57:50.558264   11753 out.go:304] Setting ErrFile to fd 2...
I0314 10:57:50.558267   11753 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0314 10:57:50.558396   11753 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
I0314 10:57:50.558648   11753 mustload.go:65] Loading cluster: functional-780000
I0314 10:57:50.558841   11753 config.go:182] Loaded profile config "functional-780000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0314 10:57:50.563197   11753 out.go:177] * The control-plane node functional-780000 host is not running: state=Stopped
I0314 10:57:50.574198   11753 out.go:177]   To start a cluster, run: "minikube start -p functional-780000"

stdout: * The control-plane node functional-780000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-780000"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-780000 tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 11754: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-780000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-780000 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-780000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-780000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-780000 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.08s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:208: failed to get Kubernetes client for "functional-780000": client config: context "functional-780000" does not exist
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (104.18s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-780000 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-780000 get svc nginx-svc: exit status 1 (66.678041ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-780000

** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-780000 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (104.18s)
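
AccessDirect never obtained a tunnel IP, so the test built its request URL from an empty host and the HTTP client failed before any network I/O, giving the "no Host in request URL" error at the top of this block. The same error reproduces with a bare scheme:

	package main

	import (
		"fmt"
		"net/http"
	)

	func main() {
		_, err := http.Get("http://") // empty host, as when no tunnel IP exists
		fmt.Println(err)              // Get "http:": http: no Host in request URL
	}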

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.38s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 -p functional-780000 image load --daemon gcr.io/google-containers/addon-resizer:functional-780000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-arm64 -p functional-780000 image load --daemon gcr.io/google-containers/addon-resizer:functional-780000 --alsologtostderr: (1.338896s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-780000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-780000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.38s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-arm64 -p functional-780000 image load --daemon gcr.io/google-containers/addon-resizer:functional-780000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-arm64 -p functional-780000 image load --daemon gcr.io/google-containers/addon-resizer:functional-780000 --alsologtostderr: (1.310549125s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-780000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-780000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.35s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.5s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (5.247778542s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-780000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p functional-780000 image load --daemon gcr.io/google-containers/addon-resizer:functional-780000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-arm64 -p functional-780000 image load --daemon gcr.io/google-containers/addon-resizer:functional-780000 --alsologtostderr: (1.174412333s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-780000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-780000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.50s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-arm64 -p functional-780000 image save gcr.io/google-containers/addon-resizer:functional-780000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:385: expected "/Users/jenkins/workspace/addon-resizer-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-arm64 -p functional-780000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-780000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-780000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.07s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:319: (dbg) Non-zero exit: dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A: exit status 9 (15.035319666s)

-- stdout --
	
	; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
	; (1 server found)
	;; global options: +cmd
	;; connection timed out; no servers could be reached

-- /stdout --
functional_test_tunnel_test.go:322: failed to resolve DNS name: exit status 9
functional_test_tunnel_test.go:329: expected body to contain "ANSWER: 1", but got *"\n; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A\n; (1 server found)\n;; global options: +cmd\n;; connection timed out; no servers could be reached\n"*
functional_test_tunnel_test.go:332: (dbg) Run:  scutil --dns
functional_test_tunnel_test.go:336: debug for DNS configuration:
DNS configuration

resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
flags    : Request A records
reach    : 0x00000002 (Reachable)

resolver #2
domain   : local
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300000

resolver #3
domain   : 254.169.in-addr.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300200

resolver #4
domain   : 8.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300400

resolver #5
domain   : 9.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300600

resolver #6
domain   : a.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300800

resolver #7
domain   : b.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 301000

resolver #8
domain   : cluster.local
nameserver[0] : 10.96.0.10
flags    : Request A records
reach    : 0x00000002 (Reachable)
order    : 1

DNS configuration (for scoped queries)

resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
if_index : 16 (en0)
flags    : Scoped, Request A records
reach    : 0x00000002 (Reachable)
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.07s)
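
The scutil dump explains why dig had a route at all: resolver #8 scopes cluster.local to 10.96.0.10, the in-cluster DNS service IP that is only reachable while minikube tunnel is forwarding it. With the cluster down the query times out, exactly as dig reports. An equivalent probe pinned to that nameserver (hostname and IP taken from the log):

	package main

	import (
		"context"
		"fmt"
		"net"
		"time"
	)

	func main() {
		r := &net.Resolver{
			PreferGo: true,
			// Force every lookup to the cluster DNS IP from resolver #8.
			Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
				d := net.Dialer{Timeout: 5 * time.Second}
				return d.DialContext(ctx, network, "10.96.0.10:53")
			},
		}
		addrs, err := r.LookupHost(context.Background(),
			"nginx-svc.default.svc.cluster.local.")
		fmt.Println(addrs, err) // expect a timeout while the tunnel is down
	}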

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (38.94s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:419: failed to hit nginx with DNS forwarded "http://nginx-svc.default.svc.cluster.local.": Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:426: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (38.94s)

TestMutliControlPlane/serial/StartCluster (10.2s)

=== RUN   TestMutliControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-068000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-068000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (10.124051125s)

-- stdout --
	* [ha-068000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18384
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "ha-068000" primary control-plane node in "ha-068000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "ha-068000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
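
This stdout shows the likely root cause of the entire run: QEMU cannot reach the socket_vmnet control socket, so every VM creation fails and all of the functional tests above inherit a stopped host. A quick probe of the socket (path taken from the error text; the exact errno depends on whether the socket file exists):

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			fmt.Println(err) // e.g. "connection refused" when no daemon is listening
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}
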
** stderr ** 
	I0314 11:00:39.371774   12062 out.go:291] Setting OutFile to fd 1 ...
	I0314 11:00:39.371909   12062 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:00:39.371913   12062 out.go:304] Setting ErrFile to fd 2...
	I0314 11:00:39.371915   12062 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:00:39.372031   12062 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
	I0314 11:00:39.373121   12062 out.go:298] Setting JSON to false
	I0314 11:00:39.389550   12062 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7211,"bootTime":1710432028,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0314 11:00:39.389635   12062 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0314 11:00:39.395155   12062 out.go:177] * [ha-068000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0314 11:00:39.404057   12062 out.go:177]   - MINIKUBE_LOCATION=18384
	I0314 11:00:39.404097   12062 notify.go:220] Checking for updates...
	I0314 11:00:39.409012   12062 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	I0314 11:00:39.416050   12062 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0314 11:00:39.422975   12062 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 11:00:39.427022   12062 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	I0314 11:00:39.429999   12062 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 11:00:39.433183   12062 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 11:00:39.437020   12062 out.go:177] * Using the qemu2 driver based on user configuration
	I0314 11:00:39.444062   12062 start.go:297] selected driver: qemu2
	I0314 11:00:39.444069   12062 start.go:901] validating driver "qemu2" against <nil>
	I0314 11:00:39.444076   12062 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 11:00:39.446526   12062 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0314 11:00:39.450016   12062 out.go:177] * Automatically selected the socket_vmnet network
	I0314 11:00:39.453132   12062 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 11:00:39.453194   12062 cni.go:84] Creating CNI manager for ""
	I0314 11:00:39.453201   12062 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0314 11:00:39.453205   12062 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0314 11:00:39.453245   12062 start.go:340] cluster config:
	{Name:ha-068000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-068000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 11:00:39.458179   12062 iso.go:125] acquiring lock: {Name:mkb282eb7d87bd36e7a71d78648e2dadb7f0a89e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 11:00:39.464834   12062 out.go:177] * Starting "ha-068000" primary control-plane node in "ha-068000" cluster
	I0314 11:00:39.469003   12062 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0314 11:00:39.469020   12062 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0314 11:00:39.469034   12062 cache.go:56] Caching tarball of preloaded images
	I0314 11:00:39.469095   12062 preload.go:173] Found /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0314 11:00:39.469101   12062 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0314 11:00:39.469333   12062 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/ha-068000/config.json ...
	I0314 11:00:39.469347   12062 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/ha-068000/config.json: {Name:mk0695afec979c616dd781c48aa2c508a830297c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 11:00:39.469660   12062 start.go:360] acquireMachinesLock for ha-068000: {Name:mk7f342f84a1fcf2acaf2ebdc885e8ee2d848fd8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 11:00:39.469701   12062 start.go:364] duration metric: took 30.291µs to acquireMachinesLock for "ha-068000"
	I0314 11:00:39.469715   12062 start.go:93] Provisioning new machine with config: &{Name:ha-068000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-068000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0314 11:00:39.469757   12062 start.go:125] createHost starting for "" (driver="qemu2")
	I0314 11:00:39.477003   12062 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0314 11:00:39.505392   12062 start.go:159] libmachine.API.Create for "ha-068000" (driver="qemu2")
	I0314 11:00:39.505422   12062 client.go:168] LocalClient.Create starting
	I0314 11:00:39.505491   12062 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/ca.pem
	I0314 11:00:39.505524   12062 main.go:141] libmachine: Decoding PEM data...
	I0314 11:00:39.505532   12062 main.go:141] libmachine: Parsing certificate...
	I0314 11:00:39.505581   12062 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/cert.pem
	I0314 11:00:39.505602   12062 main.go:141] libmachine: Decoding PEM data...
	I0314 11:00:39.505608   12062 main.go:141] libmachine: Parsing certificate...
	I0314 11:00:39.505952   12062 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18384-10823/.minikube/cache/iso/arm64/minikube-v1.32.1-1710348681-18375-arm64.iso...
	I0314 11:00:39.666623   12062 main.go:141] libmachine: Creating SSH key...
	I0314 11:00:39.726689   12062 main.go:141] libmachine: Creating Disk image...
	I0314 11:00:39.726699   12062 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0314 11:00:39.726886   12062 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/ha-068000/disk.qcow2.raw /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/ha-068000/disk.qcow2
	I0314 11:00:39.739392   12062 main.go:141] libmachine: STDOUT: 
	I0314 11:00:39.739412   12062 main.go:141] libmachine: STDERR: 
	I0314 11:00:39.739470   12062 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/ha-068000/disk.qcow2 +20000M
	I0314 11:00:39.750399   12062 main.go:141] libmachine: STDOUT: Image resized.
	
	I0314 11:00:39.750414   12062 main.go:141] libmachine: STDERR: 
	I0314 11:00:39.750427   12062 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/ha-068000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/ha-068000/disk.qcow2
	I0314 11:00:39.750429   12062 main.go:141] libmachine: Starting QEMU VM...
	I0314 11:00:39.750451   12062 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/ha-068000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/ha-068000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/ha-068000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:2a:77:6b:ee:fd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/ha-068000/disk.qcow2
	I0314 11:00:39.752670   12062 main.go:141] libmachine: STDOUT: 
	I0314 11:00:39.752686   12062 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0314 11:00:39.752702   12062 client.go:171] duration metric: took 247.279666ms to LocalClient.Create
	I0314 11:00:41.754710   12062 start.go:128] duration metric: took 2.284962042s to createHost
	I0314 11:00:41.754800   12062 start.go:83] releasing machines lock for "ha-068000", held for 2.285132709s
	W0314 11:00:41.754847   12062 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0314 11:00:41.767042   12062 out.go:177] * Deleting "ha-068000" in qemu2 ...
	W0314 11:00:41.800490   12062 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0314 11:00:41.800538   12062 start.go:728] Will try again in 5 seconds ...
	I0314 11:00:46.802121   12062 start.go:360] acquireMachinesLock for ha-068000: {Name:mk7f342f84a1fcf2acaf2ebdc885e8ee2d848fd8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 11:00:46.802620   12062 start.go:364] duration metric: took 372.584µs to acquireMachinesLock for "ha-068000"
	I0314 11:00:46.802750   12062 start.go:93] Provisioning new machine with config: &{Name:ha-068000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-068000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0314 11:00:46.803025   12062 start.go:125] createHost starting for "" (driver="qemu2")
	I0314 11:00:46.812732   12062 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0314 11:00:46.860126   12062 start.go:159] libmachine.API.Create for "ha-068000" (driver="qemu2")
	I0314 11:00:46.860188   12062 client.go:168] LocalClient.Create starting
	I0314 11:00:46.860307   12062 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/ca.pem
	I0314 11:00:46.860372   12062 main.go:141] libmachine: Decoding PEM data...
	I0314 11:00:46.860417   12062 main.go:141] libmachine: Parsing certificate...
	I0314 11:00:46.860484   12062 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/cert.pem
	I0314 11:00:46.860525   12062 main.go:141] libmachine: Decoding PEM data...
	I0314 11:00:46.860536   12062 main.go:141] libmachine: Parsing certificate...
	I0314 11:00:46.861058   12062 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18384-10823/.minikube/cache/iso/arm64/minikube-v1.32.1-1710348681-18375-arm64.iso...
	I0314 11:00:47.015851   12062 main.go:141] libmachine: Creating SSH key...
	I0314 11:00:47.400152   12062 main.go:141] libmachine: Creating Disk image...
	I0314 11:00:47.400167   12062 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0314 11:00:47.400397   12062 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/ha-068000/disk.qcow2.raw /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/ha-068000/disk.qcow2
	I0314 11:00:47.413397   12062 main.go:141] libmachine: STDOUT: 
	I0314 11:00:47.413424   12062 main.go:141] libmachine: STDERR: 
	I0314 11:00:47.413473   12062 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/ha-068000/disk.qcow2 +20000M
	I0314 11:00:47.424347   12062 main.go:141] libmachine: STDOUT: Image resized.
	
	I0314 11:00:47.424369   12062 main.go:141] libmachine: STDERR: 
	I0314 11:00:47.424382   12062 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/ha-068000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/ha-068000/disk.qcow2
	I0314 11:00:47.424387   12062 main.go:141] libmachine: Starting QEMU VM...
	I0314 11:00:47.424423   12062 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/ha-068000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/ha-068000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/ha-068000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:45:cc:23:8d:a1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/ha-068000/disk.qcow2
	I0314 11:00:47.426146   12062 main.go:141] libmachine: STDOUT: 
	I0314 11:00:47.426171   12062 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0314 11:00:47.426183   12062 client.go:171] duration metric: took 565.998708ms to LocalClient.Create
	I0314 11:00:49.427007   12062 start.go:128] duration metric: took 2.623976625s to createHost
	I0314 11:00:49.427083   12062 start.go:83] releasing machines lock for "ha-068000", held for 2.624492375s
	W0314 11:00:49.427360   12062 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-068000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-068000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0314 11:00:49.435463   12062 out.go:177] 
	W0314 11:00:49.440601   12062 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0314 11:00:49.440632   12062 out.go:239] * 
	* 
	W0314 11:00:49.442837   12062 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0314 11:00:49.452381   12062 out.go:177] 

** /stderr **
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 start -p ha-068000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-068000 -n ha-068000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-068000 -n ha-068000: exit status 7 (69.748416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-068000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMutliControlPlane/serial/StartCluster (10.20s)
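
The root cause in this log is the repeated 'Failed to connect to "/var/run/socket_vmnet": Connection refused': qemu is launched through socket_vmnet_client, which needs the socket_vmnet daemon listening on that socket, and every later TestMutliControlPlane failure cascades from the cluster never coming up. A quick sketch for checking the daemon on the host, assuming the /opt/socket_vmnet install shown in the log (the Homebrew service name is an assumption):

    # the socket should exist and a daemon process should own it
    ls -l /var/run/socket_vmnet
    pgrep -fl socket_vmnet
    # if socket_vmnet was installed as a Homebrew service, it can be restarted with
    sudo brew services restart socket_vmnet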

TestMutliControlPlane/serial/DeployApp (78.9s)

=== RUN   TestMutliControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-068000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-068000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml: exit status 1 (60.24025ms)

** stderr ** 
	error: cluster "ha-068000" does not exist

** /stderr **
ha_test.go:130: failed to create busybox deployment to ha (multi-control plane) cluster
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-068000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-068000 -- rollout status deployment/busybox: exit status 1 (58.523042ms)

** stderr ** 
	error: no server found for cluster "ha-068000"

** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-068000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-068000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (57.932917ms)

** stderr ** 
	error: no server found for cluster "ha-068000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-068000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-068000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.819083ms)

** stderr ** 
	error: no server found for cluster "ha-068000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-068000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-068000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.95925ms)

** stderr ** 
	error: no server found for cluster "ha-068000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-068000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-068000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.764791ms)

** stderr ** 
	error: no server found for cluster "ha-068000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-068000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-068000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.57175ms)

** stderr ** 
	error: no server found for cluster "ha-068000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-068000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-068000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.487834ms)

** stderr ** 
	error: no server found for cluster "ha-068000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-068000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-068000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.687ms)

** stderr ** 
	error: no server found for cluster "ha-068000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-068000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-068000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.507458ms)

** stderr ** 
	error: no server found for cluster "ha-068000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-068000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-068000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.428459ms)

** stderr ** 
	error: no server found for cluster "ha-068000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-068000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-068000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.5795ms)

** stderr ** 
	error: no server found for cluster "ha-068000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:159: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-068000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:163: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-068000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (58.388625ms)

** stderr ** 
	error: no server found for cluster "ha-068000"

** /stderr **
ha_test.go:165: failed get Pod names
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-068000 -- exec  -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-068000 -- exec  -- nslookup kubernetes.io: exit status 1 (58.144084ms)

** stderr ** 
	error: no server found for cluster "ha-068000"

** /stderr **
ha_test.go:173: Pod  could not resolve 'kubernetes.io': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-068000 -- exec  -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-068000 -- exec  -- nslookup kubernetes.default: exit status 1 (57.420333ms)

** stderr ** 
	error: no server found for cluster "ha-068000"

** /stderr **
ha_test.go:183: Pod  could not resolve 'kubernetes.default': exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-068000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-068000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (57.171ms)

** stderr ** 
	error: no server found for cluster "ha-068000"

** /stderr **
ha_test.go:191: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-068000 -n ha-068000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-068000 -n ha-068000: exit status 7 (32.443875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-068000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMutliControlPlane/serial/DeployApp (78.90s)

TestMutliControlPlane/serial/PingHostFromPods (0.09s)

=== RUN   TestMutliControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-068000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:199: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-068000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (58.488125ms)

** stderr ** 
	error: no server found for cluster "ha-068000"

** /stderr **
ha_test.go:201: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-068000 -n ha-068000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-068000 -n ha-068000: exit status 7 (32.0215ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-068000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMutliControlPlane/serial/PingHostFromPods (0.09s)

TestMutliControlPlane/serial/AddWorkerNode (0.08s)

=== RUN   TestMutliControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-068000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-068000 -v=7 --alsologtostderr: exit status 83 (44.527333ms)

-- stdout --
	* The control-plane node ha-068000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-068000"

-- /stdout --
** stderr ** 
	I0314 11:02:08.554372   12152 out.go:291] Setting OutFile to fd 1 ...
	I0314 11:02:08.554824   12152 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:02:08.554828   12152 out.go:304] Setting ErrFile to fd 2...
	I0314 11:02:08.554830   12152 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:02:08.554949   12152 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
	I0314 11:02:08.555153   12152 mustload.go:65] Loading cluster: ha-068000
	I0314 11:02:08.555341   12152 config.go:182] Loaded profile config "ha-068000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 11:02:08.560262   12152 out.go:177] * The control-plane node ha-068000 host is not running: state=Stopped
	I0314 11:02:08.564271   12152 out.go:177]   To start a cluster, run: "minikube start -p ha-068000"

** /stderr **
ha_test.go:230: failed to add worker node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-068000 -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-068000 -n ha-068000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-068000 -n ha-068000: exit status 7 (33.859584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-068000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMutliControlPlane/serial/AddWorkerNode (0.08s)

TestMutliControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMutliControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-068000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
ha_test.go:255: (dbg) Non-zero exit: kubectl --context ha-068000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.260708ms)

** stderr ** 
	Error in configuration: context was not found for specified context: ha-068000

** /stderr **
ha_test.go:257: failed to 'kubectl get nodes' with args "kubectl --context ha-068000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
ha_test.go:264: failed to decode json from label list: args "kubectl --context ha-068000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-068000 -n ha-068000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-068000 -n ha-068000: exit status 7 (32.813208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-068000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMutliControlPlane/serial/NodeLabels (0.06s)
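
The "context was not found" error is another cascade: the ha-068000 kubeconfig context is only written once a start succeeds, so kubectl has nothing to talk to. That can be confirmed directly, assuming kubectl is on PATH and KUBECONFIG points at the integration kubeconfig shown earlier in the log:

    # the ha-068000 context should be absent from the list
    kubectl config get-contexts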

TestMutliControlPlane/serial/HAppyAfterClusterStart (0.11s)

=== RUN   TestMutliControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-068000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-068000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-068000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.4\",\"ClusterName\":\"ha-068000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-068000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-068000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-068000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.4\",\"ClusterName\":\"ha-068000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-068000 -n ha-068000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-068000 -n ha-068000: exit status 7 (32.916042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-068000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMutliControlPlane/serial/HAppyAfterClusterStart (0.11s)
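
The assertion compares the node count and the derived "HAppy" status against the profile JSON quoted above. With only the stub profile on disk, the relevant fields can be pulled out with jq (jq itself is an assumption; the field names come from the output above):

    # expected by the test: 4 nodes and status "HAppy"; actual: 1 node, "Stopped"
    out/minikube-darwin-arm64 profile list --output json \
      | jq '.valid[] | {Name, Status, nodes: (.Config.Nodes | length)}'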

TestMutliControlPlane/serial/CopyFile (0.07s)

=== RUN   TestMutliControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-068000 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-068000 status --output json -v=7 --alsologtostderr: exit status 7 (33.093292ms)

-- stdout --
	{"Name":"ha-068000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0314 11:02:08.802392   12165 out.go:291] Setting OutFile to fd 1 ...
	I0314 11:02:08.802514   12165 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:02:08.802518   12165 out.go:304] Setting ErrFile to fd 2...
	I0314 11:02:08.802520   12165 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:02:08.802635   12165 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
	I0314 11:02:08.802758   12165 out.go:298] Setting JSON to true
	I0314 11:02:08.802775   12165 mustload.go:65] Loading cluster: ha-068000
	I0314 11:02:08.802838   12165 notify.go:220] Checking for updates...
	I0314 11:02:08.802981   12165 config.go:182] Loaded profile config "ha-068000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 11:02:08.802985   12165 status.go:255] checking status of ha-068000 ...
	I0314 11:02:08.803196   12165 status.go:330] ha-068000 host status = "Stopped" (err=<nil>)
	I0314 11:02:08.803200   12165 status.go:343] host is not running, skipping remaining checks
	I0314 11:02:08.803202   12165 status.go:257] ha-068000 status: &{Name:ha-068000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:333: failed to decode json from status: args "out/minikube-darwin-arm64 -p ha-068000 status --output json -v=7 --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-068000 -n ha-068000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-068000 -n ha-068000: exit status 7 (31.814375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-068000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMutliControlPlane/serial/CopyFile (0.07s)
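
The unmarshal error above is a shape mismatch rather than corrupt output: for a single-node profile `minikube status --output json` prints one JSON object (visible in the stdout block), while the harness expects an array of per-node statuses ([]cmd.Status) from the multi-node HA cluster that never started. A quick sketch for checking the shape (jq is an assumption):

    # prints "object" for the stub single-node profile; the test needs "array"
    out/minikube-darwin-arm64 -p ha-068000 status --output json | jq type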

TestMutliControlPlane/serial/StopSecondaryNode (0.12s)

=== RUN   TestMutliControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-068000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-068000 node stop m02 -v=7 --alsologtostderr: exit status 85 (49.954667ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0314 11:02:08.867582   12169 out.go:291] Setting OutFile to fd 1 ...
	I0314 11:02:08.868044   12169 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:02:08.868047   12169 out.go:304] Setting ErrFile to fd 2...
	I0314 11:02:08.868050   12169 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:02:08.868164   12169 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
	I0314 11:02:08.868387   12169 mustload.go:65] Loading cluster: ha-068000
	I0314 11:02:08.868567   12169 config.go:182] Loaded profile config "ha-068000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 11:02:08.872130   12169 out.go:177] 
	W0314 11:02:08.876180   12169 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0314 11:02:08.876185   12169 out.go:239] * 
	* 
	W0314 11:02:08.878279   12169 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0314 11:02:08.882214   12169 out.go:177] 

** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-darwin-arm64 -p ha-068000 node stop m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-068000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-068000 status -v=7 --alsologtostderr: exit status 7 (32.358875ms)

-- stdout --
	ha-068000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0314 11:02:08.917860   12171 out.go:291] Setting OutFile to fd 1 ...
	I0314 11:02:08.918007   12171 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:02:08.918010   12171 out.go:304] Setting ErrFile to fd 2...
	I0314 11:02:08.918013   12171 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:02:08.918130   12171 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
	I0314 11:02:08.918245   12171 out.go:298] Setting JSON to false
	I0314 11:02:08.918254   12171 mustload.go:65] Loading cluster: ha-068000
	I0314 11:02:08.918301   12171 notify.go:220] Checking for updates...
	I0314 11:02:08.918442   12171 config.go:182] Loaded profile config "ha-068000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 11:02:08.918447   12171 status.go:255] checking status of ha-068000 ...
	I0314 11:02:08.918641   12171 status.go:330] ha-068000 host status = "Stopped" (err=<nil>)
	I0314 11:02:08.918645   12171 status.go:343] host is not running, skipping remaining checks
	I0314 11:02:08.918648   12171 status.go:257] ha-068000 status: &{Name:ha-068000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:375: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-068000 status -v=7 --alsologtostderr": ha-068000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-068000 status -v=7 --alsologtostderr": ha-068000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-068000 status -v=7 --alsologtostderr": ha-068000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-068000 status -v=7 --alsologtostderr": ha-068000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-068000 -n ha-068000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-068000 -n ha-068000: exit status 7 (32.516375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-068000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMutliControlPlane/serial/StopSecondaryNode (0.12s)
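Note: the exit status 85 above (GUEST_NODE_RETRIEVE: Could not find node m02) follows from the earlier StartCluster failure; the "ha-068000" profile only ever recorded its primary node, so every m02 operation in this serial suite fails the same way. The post-mortem helper then reads cluster state from the exit code of `minikube status`, where a non-zero code such as 7 is state information (stopped components) rather than a hard error, which is why helpers_test.go prints "(may be ok)". A minimal sketch of that check, assuming the same binary path and profile name used in this run:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Same invocation as the post-mortem step above. Output() still returns
	// the captured stdout even when the command exits non-zero.
	cmd := exec.Command("out/minikube-darwin-arm64", "status",
		"--format={{.Host}}", "-p", "ha-068000", "-n", "ha-068000")
	out, err := cmd.Output()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// Non-zero exit encodes component state (here: host stopped),
		// so the harness treats it as "may be ok" rather than a failure.
		fmt.Printf("state=%q exit=%d (may be ok)\n", string(out), ee.ExitCode())
		return
	}
	if err != nil {
		panic(err)
	}
	fmt.Printf("state=%q exit=0\n", string(out))
}

Against this run it would report state "Stopped" with exit 7, matching the post-mortem output above.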

TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.11s)

=== RUN   TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-068000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-068000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-068000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.4\",\"ClusterName\":\"ha-068000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-068000 -n ha-068000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-068000 -n ha-068000: exit status 7 (32.231667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-068000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.11s)
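Note: the assertion at ha_test.go:413 is comparing two fields buried in the quoted JSON: the profile's top-level Status is "Stopped" where "Degraded" was expected, and Config.Nodes holds a single control-plane entry. A short sketch that surfaces just those fields, assuming the same JSON shape quoted above (the struct mirrors only the fields needed and is not the test suite's own type):

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// Only the fields the assertions look at; encoding/json ignores
// everything else in the profile JSON.
type profiles struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
		Config struct {
			Nodes []struct {
				Name         string `json:"Name"`
				ControlPlane bool   `json:"ControlPlane"`
				Worker       bool   `json:"Worker"`
			} `json:"Nodes"`
		} `json:"Config"`
	} `json:"valid"`
}

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64",
		"profile", "list", "--output", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var p profiles
	if err := json.Unmarshal(out, &p); err != nil {
		log.Fatal(err)
	}
	for _, prof := range p.Valid {
		fmt.Printf("%s: status=%s nodes=%d\n",
			prof.Name, prof.Status, len(prof.Config.Nodes))
	}
}

Against this run it would print "ha-068000: status=Stopped nodes=1", which matches both this failed expectation and the node-count assertion later in the suite.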

TestMutliControlPlane/serial/RestartSecondaryNode (35.4s)

=== RUN   TestMutliControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-068000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-068000 node start m02 -v=7 --alsologtostderr: exit status 85 (50.165333ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0314 11:02:09.090496   12181 out.go:291] Setting OutFile to fd 1 ...
	I0314 11:02:09.090878   12181 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:02:09.090883   12181 out.go:304] Setting ErrFile to fd 2...
	I0314 11:02:09.090889   12181 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:02:09.091020   12181 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
	I0314 11:02:09.091219   12181 mustload.go:65] Loading cluster: ha-068000
	I0314 11:02:09.091402   12181 config.go:182] Loaded profile config "ha-068000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 11:02:09.095672   12181 out.go:177] 
	W0314 11:02:09.099696   12181 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0314 11:02:09.099700   12181 out.go:239] * 
	* 
	W0314 11:02:09.101695   12181 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0314 11:02:09.104648   12181 out.go:177] 

** /stderr **
ha_test.go:422: I0314 11:02:09.090496   12181 out.go:291] Setting OutFile to fd 1 ...
I0314 11:02:09.090878   12181 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0314 11:02:09.090883   12181 out.go:304] Setting ErrFile to fd 2...
I0314 11:02:09.090889   12181 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0314 11:02:09.091020   12181 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
I0314 11:02:09.091219   12181 mustload.go:65] Loading cluster: ha-068000
I0314 11:02:09.091402   12181 config.go:182] Loaded profile config "ha-068000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0314 11:02:09.095672   12181 out.go:177] 
W0314 11:02:09.099696   12181 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
W0314 11:02:09.099700   12181 out.go:239] * 
* 
W0314 11:02:09.101695   12181 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0314 11:02:09.104648   12181 out.go:177] 
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-068000 node start m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-068000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-068000 status -v=7 --alsologtostderr: exit status 7 (32.545333ms)

-- stdout --
	ha-068000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0314 11:02:09.140507   12183 out.go:291] Setting OutFile to fd 1 ...
	I0314 11:02:09.140692   12183 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:02:09.140695   12183 out.go:304] Setting ErrFile to fd 2...
	I0314 11:02:09.140697   12183 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:02:09.140815   12183 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
	I0314 11:02:09.140930   12183 out.go:298] Setting JSON to false
	I0314 11:02:09.140943   12183 mustload.go:65] Loading cluster: ha-068000
	I0314 11:02:09.141001   12183 notify.go:220] Checking for updates...
	I0314 11:02:09.141129   12183 config.go:182] Loaded profile config "ha-068000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 11:02:09.141140   12183 status.go:255] checking status of ha-068000 ...
	I0314 11:02:09.141338   12183 status.go:330] ha-068000 host status = "Stopped" (err=<nil>)
	I0314 11:02:09.141342   12183 status.go:343] host is not running, skipping remaining checks
	I0314 11:02:09.141345   12183 status.go:257] ha-068000 status: &{Name:ha-068000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-068000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-068000 status -v=7 --alsologtostderr: exit status 7 (71.4725ms)

-- stdout --
	ha-068000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0314 11:02:10.510389   12185 out.go:291] Setting OutFile to fd 1 ...
	I0314 11:02:10.510608   12185 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:02:10.510612   12185 out.go:304] Setting ErrFile to fd 2...
	I0314 11:02:10.510615   12185 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:02:10.510775   12185 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
	I0314 11:02:10.510928   12185 out.go:298] Setting JSON to false
	I0314 11:02:10.510939   12185 mustload.go:65] Loading cluster: ha-068000
	I0314 11:02:10.510966   12185 notify.go:220] Checking for updates...
	I0314 11:02:10.511194   12185 config.go:182] Loaded profile config "ha-068000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 11:02:10.511202   12185 status.go:255] checking status of ha-068000 ...
	I0314 11:02:10.511476   12185 status.go:330] ha-068000 host status = "Stopped" (err=<nil>)
	I0314 11:02:10.511481   12185 status.go:343] host is not running, skipping remaining checks
	I0314 11:02:10.511484   12185 status.go:257] ha-068000 status: &{Name:ha-068000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-068000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-068000 status -v=7 --alsologtostderr: exit status 7 (76.159208ms)

-- stdout --
	ha-068000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0314 11:02:12.838458   12187 out.go:291] Setting OutFile to fd 1 ...
	I0314 11:02:12.838620   12187 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:02:12.838624   12187 out.go:304] Setting ErrFile to fd 2...
	I0314 11:02:12.838627   12187 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:02:12.838799   12187 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
	I0314 11:02:12.838944   12187 out.go:298] Setting JSON to false
	I0314 11:02:12.838954   12187 mustload.go:65] Loading cluster: ha-068000
	I0314 11:02:12.838982   12187 notify.go:220] Checking for updates...
	I0314 11:02:12.839186   12187 config.go:182] Loaded profile config "ha-068000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 11:02:12.839192   12187 status.go:255] checking status of ha-068000 ...
	I0314 11:02:12.839436   12187 status.go:330] ha-068000 host status = "Stopped" (err=<nil>)
	I0314 11:02:12.839441   12187 status.go:343] host is not running, skipping remaining checks
	I0314 11:02:12.839444   12187 status.go:257] ha-068000 status: &{Name:ha-068000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-068000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-068000 status -v=7 --alsologtostderr: exit status 7 (74.916292ms)

-- stdout --
	ha-068000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0314 11:02:14.994254   12192 out.go:291] Setting OutFile to fd 1 ...
	I0314 11:02:14.994418   12192 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:02:14.994422   12192 out.go:304] Setting ErrFile to fd 2...
	I0314 11:02:14.994425   12192 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:02:14.994574   12192 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
	I0314 11:02:14.994721   12192 out.go:298] Setting JSON to false
	I0314 11:02:14.994732   12192 mustload.go:65] Loading cluster: ha-068000
	I0314 11:02:14.994775   12192 notify.go:220] Checking for updates...
	I0314 11:02:14.994984   12192 config.go:182] Loaded profile config "ha-068000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 11:02:14.994994   12192 status.go:255] checking status of ha-068000 ...
	I0314 11:02:14.995258   12192 status.go:330] ha-068000 host status = "Stopped" (err=<nil>)
	I0314 11:02:14.995263   12192 status.go:343] host is not running, skipping remaining checks
	I0314 11:02:14.995265   12192 status.go:257] ha-068000 status: &{Name:ha-068000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-068000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-068000 status -v=7 --alsologtostderr: exit status 7 (78.396917ms)

-- stdout --
	ha-068000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0314 11:02:19.383556   12194 out.go:291] Setting OutFile to fd 1 ...
	I0314 11:02:19.383750   12194 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:02:19.383755   12194 out.go:304] Setting ErrFile to fd 2...
	I0314 11:02:19.383758   12194 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:02:19.383912   12194 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
	I0314 11:02:19.384075   12194 out.go:298] Setting JSON to false
	I0314 11:02:19.384086   12194 mustload.go:65] Loading cluster: ha-068000
	I0314 11:02:19.384125   12194 notify.go:220] Checking for updates...
	I0314 11:02:19.384361   12194 config.go:182] Loaded profile config "ha-068000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 11:02:19.384368   12194 status.go:255] checking status of ha-068000 ...
	I0314 11:02:19.384626   12194 status.go:330] ha-068000 host status = "Stopped" (err=<nil>)
	I0314 11:02:19.384631   12194 status.go:343] host is not running, skipping remaining checks
	I0314 11:02:19.384634   12194 status.go:257] ha-068000 status: &{Name:ha-068000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-068000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-068000 status -v=7 --alsologtostderr: exit status 7 (76.085625ms)

-- stdout --
	ha-068000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0314 11:02:23.865971   12196 out.go:291] Setting OutFile to fd 1 ...
	I0314 11:02:23.866138   12196 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:02:23.866142   12196 out.go:304] Setting ErrFile to fd 2...
	I0314 11:02:23.866145   12196 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:02:23.866307   12196 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
	I0314 11:02:23.866476   12196 out.go:298] Setting JSON to false
	I0314 11:02:23.866487   12196 mustload.go:65] Loading cluster: ha-068000
	I0314 11:02:23.866524   12196 notify.go:220] Checking for updates...
	I0314 11:02:23.866722   12196 config.go:182] Loaded profile config "ha-068000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 11:02:23.866729   12196 status.go:255] checking status of ha-068000 ...
	I0314 11:02:23.867018   12196 status.go:330] ha-068000 host status = "Stopped" (err=<nil>)
	I0314 11:02:23.867023   12196 status.go:343] host is not running, skipping remaining checks
	I0314 11:02:23.867026   12196 status.go:257] ha-068000 status: &{Name:ha-068000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-068000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-068000 status -v=7 --alsologtostderr: exit status 7 (75.2805ms)

-- stdout --
	ha-068000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0314 11:02:28.876369   12198 out.go:291] Setting OutFile to fd 1 ...
	I0314 11:02:28.876536   12198 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:02:28.876540   12198 out.go:304] Setting ErrFile to fd 2...
	I0314 11:02:28.876543   12198 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:02:28.876698   12198 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
	I0314 11:02:28.876854   12198 out.go:298] Setting JSON to false
	I0314 11:02:28.876865   12198 mustload.go:65] Loading cluster: ha-068000
	I0314 11:02:28.876895   12198 notify.go:220] Checking for updates...
	I0314 11:02:28.877102   12198 config.go:182] Loaded profile config "ha-068000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 11:02:28.877109   12198 status.go:255] checking status of ha-068000 ...
	I0314 11:02:28.877353   12198 status.go:330] ha-068000 host status = "Stopped" (err=<nil>)
	I0314 11:02:28.877358   12198 status.go:343] host is not running, skipping remaining checks
	I0314 11:02:28.877361   12198 status.go:257] ha-068000 status: &{Name:ha-068000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-068000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-068000 status -v=7 --alsologtostderr: exit status 7 (75.304041ms)

-- stdout --
	ha-068000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0314 11:02:44.421835   12205 out.go:291] Setting OutFile to fd 1 ...
	I0314 11:02:44.421989   12205 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:02:44.421994   12205 out.go:304] Setting ErrFile to fd 2...
	I0314 11:02:44.421996   12205 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:02:44.422148   12205 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
	I0314 11:02:44.422305   12205 out.go:298] Setting JSON to false
	I0314 11:02:44.422316   12205 mustload.go:65] Loading cluster: ha-068000
	I0314 11:02:44.422349   12205 notify.go:220] Checking for updates...
	I0314 11:02:44.422552   12205 config.go:182] Loaded profile config "ha-068000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 11:02:44.422558   12205 status.go:255] checking status of ha-068000 ...
	I0314 11:02:44.422816   12205 status.go:330] ha-068000 host status = "Stopped" (err=<nil>)
	I0314 11:02:44.422821   12205 status.go:343] host is not running, skipping remaining checks
	I0314 11:02:44.422824   12205 status.go:257] ha-068000 status: &{Name:ha-068000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-068000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-068000 -n ha-068000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-068000 -n ha-068000: exit status 7 (35.543ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-068000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMutliControlPlane/serial/RestartSecondaryNode (35.40s)
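Note the timestamps across the repeated ha_test.go:428 status runs (11:02:09, :10, :12, :14, :19, :23, :28, :44): the harness polls with growing intervals for roughly 35 seconds before giving up, which accounts for this subtest's 35.40s duration even though each individual status call returns in well under 100ms. A sketch of that polling pattern, assuming a simple doubling backoff (the actual retry helper in the test suite may differ):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(35 * time.Second)
	delay := 1 * time.Second
	for time.Now().Before(deadline) {
		// A non-zero exit (here: 7, host stopped) keeps the loop retrying.
		if err := exec.Command("out/minikube-darwin-arm64",
			"-p", "ha-068000", "status", "-v=7").Run(); err == nil {
			fmt.Println("cluster reported healthy")
			return
		}
		time.Sleep(delay)
		delay *= 2 // back off between polls
	}
	fmt.Println("gave up waiting for a healthy status")
}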

TestMutliControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.11s)

=== RUN   TestMutliControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-068000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-068000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-068000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPo
rt\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.4\",\"ClusterName\":\"ha-068000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRu
ntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHA
gentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-068000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-068000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-068000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,
\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.4\",\"ClusterName\":\"ha-068000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4\",
\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\
":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-068000 -n ha-068000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-068000 -n ha-068000: exit status 7 (32.546291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-068000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMutliControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.11s)

TestMutliControlPlane/serial/RestartClusterKeepsNodes (8.82s)

=== RUN   TestMutliControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-068000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-068000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-068000 -v=7 --alsologtostderr: (3.46173475s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-068000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-068000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.225991958s)

-- stdout --
	* [ha-068000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18384
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-068000" primary control-plane node in "ha-068000" cluster
	* Restarting existing qemu2 VM for "ha-068000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-068000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0314 11:02:48.132216   12237 out.go:291] Setting OutFile to fd 1 ...
	I0314 11:02:48.132378   12237 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:02:48.132382   12237 out.go:304] Setting ErrFile to fd 2...
	I0314 11:02:48.132385   12237 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:02:48.132538   12237 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
	I0314 11:02:48.133735   12237 out.go:298] Setting JSON to false
	I0314 11:02:48.153474   12237 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7340,"bootTime":1710432028,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0314 11:02:48.153546   12237 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0314 11:02:48.158784   12237 out.go:177] * [ha-068000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0314 11:02:48.166697   12237 out.go:177]   - MINIKUBE_LOCATION=18384
	I0314 11:02:48.170551   12237 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	I0314 11:02:48.166759   12237 notify.go:220] Checking for updates...
	I0314 11:02:48.176665   12237 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0314 11:02:48.179702   12237 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 11:02:48.182737   12237 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	I0314 11:02:48.185650   12237 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 11:02:48.188983   12237 config.go:182] Loaded profile config "ha-068000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 11:02:48.189048   12237 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 11:02:48.193578   12237 out.go:177] * Using the qemu2 driver based on existing profile
	I0314 11:02:48.200679   12237 start.go:297] selected driver: qemu2
	I0314 11:02:48.200685   12237 start.go:901] validating driver "qemu2" against &{Name:ha-068000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.28.4 ClusterName:ha-068000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 11:02:48.200757   12237 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 11:02:48.203335   12237 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 11:02:48.203370   12237 cni.go:84] Creating CNI manager for ""
	I0314 11:02:48.203375   12237 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0314 11:02:48.203423   12237 start.go:340] cluster config:
	{Name:ha-068000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-068000 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 11:02:48.208341   12237 iso.go:125] acquiring lock: {Name:mkb282eb7d87bd36e7a71d78648e2dadb7f0a89e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 11:02:48.215686   12237 out.go:177] * Starting "ha-068000" primary control-plane node in "ha-068000" cluster
	I0314 11:02:48.218633   12237 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0314 11:02:48.218647   12237 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0314 11:02:48.218661   12237 cache.go:56] Caching tarball of preloaded images
	I0314 11:02:48.218716   12237 preload.go:173] Found /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0314 11:02:48.218723   12237 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0314 11:02:48.218785   12237 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/ha-068000/config.json ...
	I0314 11:02:48.219255   12237 start.go:360] acquireMachinesLock for ha-068000: {Name:mk7f342f84a1fcf2acaf2ebdc885e8ee2d848fd8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 11:02:48.219294   12237 start.go:364] duration metric: took 31.416µs to acquireMachinesLock for "ha-068000"
	I0314 11:02:48.219305   12237 start.go:96] Skipping create...Using existing machine configuration
	I0314 11:02:48.219311   12237 fix.go:54] fixHost starting: 
	I0314 11:02:48.219448   12237 fix.go:112] recreateIfNeeded on ha-068000: state=Stopped err=<nil>
	W0314 11:02:48.219459   12237 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 11:02:48.227539   12237 out.go:177] * Restarting existing qemu2 VM for "ha-068000" ...
	I0314 11:02:48.230724   12237 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/ha-068000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/ha-068000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/ha-068000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:45:cc:23:8d:a1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/ha-068000/disk.qcow2
	I0314 11:02:48.233168   12237 main.go:141] libmachine: STDOUT: 
	I0314 11:02:48.233192   12237 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0314 11:02:48.233227   12237 fix.go:56] duration metric: took 13.916458ms for fixHost
	I0314 11:02:48.233232   12237 start.go:83] releasing machines lock for "ha-068000", held for 13.93375ms
	W0314 11:02:48.233241   12237 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0314 11:02:48.233290   12237 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0314 11:02:48.233296   12237 start.go:728] Will try again in 5 seconds ...
	I0314 11:02:53.235404   12237 start.go:360] acquireMachinesLock for ha-068000: {Name:mk7f342f84a1fcf2acaf2ebdc885e8ee2d848fd8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 11:02:53.235736   12237 start.go:364] duration metric: took 236.334µs to acquireMachinesLock for "ha-068000"
	I0314 11:02:53.235859   12237 start.go:96] Skipping create...Using existing machine configuration
	I0314 11:02:53.235899   12237 fix.go:54] fixHost starting: 
	I0314 11:02:53.236539   12237 fix.go:112] recreateIfNeeded on ha-068000: state=Stopped err=<nil>
	W0314 11:02:53.236561   12237 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 11:02:53.240689   12237 out.go:177] * Restarting existing qemu2 VM for "ha-068000" ...
	I0314 11:02:53.245562   12237 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/ha-068000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/ha-068000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/ha-068000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:45:cc:23:8d:a1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/ha-068000/disk.qcow2
	I0314 11:02:53.253066   12237 main.go:141] libmachine: STDOUT: 
	I0314 11:02:53.253126   12237 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0314 11:02:53.253194   12237 fix.go:56] duration metric: took 17.321834ms for fixHost
	I0314 11:02:53.253209   12237 start.go:83] releasing machines lock for "ha-068000", held for 17.453291ms
	W0314 11:02:53.253398   12237 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-068000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-068000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0314 11:02:53.260610   12237 out.go:177] 
	W0314 11:02:53.263528   12237 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0314 11:02:53.263542   12237 out.go:239] * 
	* 
	W0314 11:02:53.265043   12237 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0314 11:02:53.275533   12237 out.go:177] 

** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-068000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-068000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-068000 -n ha-068000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-068000 -n ha-068000: exit status 7 (35.045875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-068000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMutliControlPlane/serial/RestartClusterKeepsNodes (8.82s)
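
Every attempt in the block above dies at the same precondition: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so qemu is never launched. A minimal standalone sketch (standard library only, not part of the suite) that reproduces the failing check directly:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// SocketVMnetPath from the profile config quoted in this report.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// With the socket_vmnet daemon down this prints a "connection
			// refused" error, matching the STDERR lines above.
			fmt.Println("dial failed:", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}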

TestMutliControlPlane/serial/DeleteSecondaryNode (0.11s)

=== RUN   TestMutliControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-068000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-068000 node delete m03 -v=7 --alsologtostderr: exit status 83 (44.627166ms)

-- stdout --
	* The control-plane node ha-068000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-068000"

-- /stdout --
** stderr ** 
	I0314 11:02:53.422988   12251 out.go:291] Setting OutFile to fd 1 ...
	I0314 11:02:53.423368   12251 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:02:53.423372   12251 out.go:304] Setting ErrFile to fd 2...
	I0314 11:02:53.423374   12251 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:02:53.423500   12251 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
	I0314 11:02:53.423712   12251 mustload.go:65] Loading cluster: ha-068000
	I0314 11:02:53.423909   12251 config.go:182] Loaded profile config "ha-068000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 11:02:53.428537   12251 out.go:177] * The control-plane node ha-068000 host is not running: state=Stopped
	I0314 11:02:53.432516   12251 out.go:177]   To start a cluster, run: "minikube start -p ha-068000"

** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-068000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-068000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-068000 status -v=7 --alsologtostderr: exit status 7 (32.865542ms)

-- stdout --
	ha-068000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0314 11:02:53.468618   12253 out.go:291] Setting OutFile to fd 1 ...
	I0314 11:02:53.468783   12253 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:02:53.468786   12253 out.go:304] Setting ErrFile to fd 2...
	I0314 11:02:53.468788   12253 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:02:53.468921   12253 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
	I0314 11:02:53.469042   12253 out.go:298] Setting JSON to false
	I0314 11:02:53.469050   12253 mustload.go:65] Loading cluster: ha-068000
	I0314 11:02:53.469110   12253 notify.go:220] Checking for updates...
	I0314 11:02:53.469230   12253 config.go:182] Loaded profile config "ha-068000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 11:02:53.469235   12253 status.go:255] checking status of ha-068000 ...
	I0314 11:02:53.469435   12253 status.go:330] ha-068000 host status = "Stopped" (err=<nil>)
	I0314 11:02:53.469438   12253 status.go:343] host is not running, skipping remaining checks
	I0314 11:02:53.469440   12253 status.go:257] ha-068000 status: &{Name:ha-068000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-068000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-068000 -n ha-068000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-068000 -n ha-068000: exit status 7 (32.711208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-068000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMutliControlPlane/serial/DeleteSecondaryNode (0.11s)
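
The "(dbg) Non-zero exit ... exit status 83" lines above come from the harness running the binary and inspecting its exit status. A hedged illustration of that capture pattern using only os/exec; the binary path and arguments are the ones this report already uses, and the surrounding harness code is not reproduced here:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64", "-p", "ha-068000", "node", "delete", "m03")
		out, err := cmd.CombinedOutput()
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			// On this run the command reports exit status 83.
			fmt.Printf("non-zero exit: %d\n%s", ee.ExitCode(), out)
			return
		}
		if err != nil {
			fmt.Println("could not start command:", err)
			return
		}
		fmt.Printf("%s", out)
	}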

TestMutliControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.11s)

=== RUN   TestMutliControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-068000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-068000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-068000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.4\",\"ClusterName\":\"ha-068000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-068000 -n ha-068000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-068000 -n ha-068000: exit status 7 (34.70075ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-068000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMutliControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.11s)
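
The assertion at ha_test.go:413 inspects the Status of the "ha-068000" entry in the `profile list --output json` dump quoted above. A trimmed-down decoder for just the fields that check consults; the struct shape here is illustrative, not minikube's own types:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	type profileList struct {
		Valid []struct {
			Name   string `json:"Name"`
			Status string `json:"Status"`
			Config struct {
				Nodes []struct {
					ControlPlane bool `json:"ControlPlane"`
					Worker       bool `json:"Worker"`
				} `json:"Nodes"`
			} `json:"Config"`
		} `json:"valid"`
	}

	func main() {
		// Reduced from the JSON captured above: one stopped profile, one node.
		raw := []byte(`{"invalid":[],"valid":[{"Name":"ha-068000","Status":"Stopped","Config":{"Nodes":[{"ControlPlane":true,"Worker":true}]}}]}`)
		var pl profileList
		if err := json.Unmarshal(raw, &pl); err != nil {
			panic(err)
		}
		for _, p := range pl.Valid {
			// Prints: ha-068000: status=Stopped nodes=1 (the test wanted "Degraded").
			fmt.Printf("%s: status=%s nodes=%d\n", p.Name, p.Status, len(p.Config.Nodes))
		}
	}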

TestMutliControlPlane/serial/StopCluster (1.95s)

=== RUN   TestMutliControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-068000 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-068000 stop -v=7 --alsologtostderr: (1.849579709s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-068000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-068000 status -v=7 --alsologtostderr: exit status 7 (66.56075ms)

-- stdout --
	ha-068000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0314 11:02:55.529430   12273 out.go:291] Setting OutFile to fd 1 ...
	I0314 11:02:55.529581   12273 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:02:55.529585   12273 out.go:304] Setting ErrFile to fd 2...
	I0314 11:02:55.529590   12273 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:02:55.529755   12273 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
	I0314 11:02:55.529899   12273 out.go:298] Setting JSON to false
	I0314 11:02:55.529909   12273 mustload.go:65] Loading cluster: ha-068000
	I0314 11:02:55.529938   12273 notify.go:220] Checking for updates...
	I0314 11:02:55.530129   12273 config.go:182] Loaded profile config "ha-068000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 11:02:55.530135   12273 status.go:255] checking status of ha-068000 ...
	I0314 11:02:55.530375   12273 status.go:330] ha-068000 host status = "Stopped" (err=<nil>)
	I0314 11:02:55.530379   12273 status.go:343] host is not running, skipping remaining checks
	I0314 11:02:55.530382   12273 status.go:257] ha-068000 status: &{Name:ha-068000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-068000 status -v=7 --alsologtostderr": ha-068000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-068000 status -v=7 --alsologtostderr": ha-068000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-068000 status -v=7 --alsologtostderr": ha-068000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-068000 -n ha-068000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-068000 -n ha-068000: exit status 7 (33.919792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-068000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMutliControlPlane/serial/StopCluster (1.95s)
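
The three failure messages above (ha_test.go:543, :549, :552) read as occurrence counts over the plain-text status output: the test evidently expects two "type: Control Plane" lines, three "kubelet: Stopped", and two "apiserver: Stopped", while the single surviving node yields one of each. A sketch of that style of check, assuming strings.Count does the tallying:

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		// The status block captured above, as one string.
		status := "ha-068000\ntype: Control Plane\nhost: Stopped\nkubelet: Stopped\napiserver: Stopped\nkubeconfig: Stopped\n"
		fmt.Println("control planes:", strings.Count(status, "type: Control Plane"))   // got 1, test wants 2
		fmt.Println("stopped kubelets:", strings.Count(status, "kubelet: Stopped"))    // got 1, test wants 3
		fmt.Println("stopped apiservers:", strings.Count(status, "apiserver: Stopped")) // got 1, test wants 2
	}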

TestMutliControlPlane/serial/RestartCluster (5.26s)

=== RUN   TestMutliControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-068000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-068000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.185634125s)

-- stdout --
	* [ha-068000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18384
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-068000" primary control-plane node in "ha-068000" cluster
	* Restarting existing qemu2 VM for "ha-068000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-068000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0314 11:02:55.596899   12277 out.go:291] Setting OutFile to fd 1 ...
	I0314 11:02:55.597029   12277 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:02:55.597035   12277 out.go:304] Setting ErrFile to fd 2...
	I0314 11:02:55.597037   12277 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:02:55.597166   12277 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
	I0314 11:02:55.598136   12277 out.go:298] Setting JSON to false
	I0314 11:02:55.614199   12277 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7347,"bootTime":1710432028,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0314 11:02:55.614269   12277 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0314 11:02:55.618526   12277 out.go:177] * [ha-068000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0314 11:02:55.625371   12277 out.go:177]   - MINIKUBE_LOCATION=18384
	I0314 11:02:55.629345   12277 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	I0314 11:02:55.625418   12277 notify.go:220] Checking for updates...
	I0314 11:02:55.636336   12277 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0314 11:02:55.639439   12277 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 11:02:55.642360   12277 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	I0314 11:02:55.645347   12277 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 11:02:55.648639   12277 config.go:182] Loaded profile config "ha-068000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 11:02:55.648895   12277 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 11:02:55.653269   12277 out.go:177] * Using the qemu2 driver based on existing profile
	I0314 11:02:55.660360   12277 start.go:297] selected driver: qemu2
	I0314 11:02:55.660366   12277 start.go:901] validating driver "qemu2" against &{Name:ha-068000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.28.4 ClusterName:ha-068000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 11:02:55.660441   12277 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 11:02:55.662677   12277 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 11:02:55.662712   12277 cni.go:84] Creating CNI manager for ""
	I0314 11:02:55.662717   12277 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0314 11:02:55.662756   12277 start.go:340] cluster config:
	{Name:ha-068000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-068000 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 11:02:55.667118   12277 iso.go:125] acquiring lock: {Name:mkb282eb7d87bd36e7a71d78648e2dadb7f0a89e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 11:02:55.674363   12277 out.go:177] * Starting "ha-068000" primary control-plane node in "ha-068000" cluster
	I0314 11:02:55.678386   12277 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0314 11:02:55.678411   12277 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0314 11:02:55.678423   12277 cache.go:56] Caching tarball of preloaded images
	I0314 11:02:55.678476   12277 preload.go:173] Found /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0314 11:02:55.678482   12277 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0314 11:02:55.678551   12277 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/ha-068000/config.json ...
	I0314 11:02:55.679003   12277 start.go:360] acquireMachinesLock for ha-068000: {Name:mk7f342f84a1fcf2acaf2ebdc885e8ee2d848fd8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 11:02:55.679030   12277 start.go:364] duration metric: took 20.333µs to acquireMachinesLock for "ha-068000"
	I0314 11:02:55.679039   12277 start.go:96] Skipping create...Using existing machine configuration
	I0314 11:02:55.679043   12277 fix.go:54] fixHost starting: 
	I0314 11:02:55.679158   12277 fix.go:112] recreateIfNeeded on ha-068000: state=Stopped err=<nil>
	W0314 11:02:55.679166   12277 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 11:02:55.687357   12277 out.go:177] * Restarting existing qemu2 VM for "ha-068000" ...
	I0314 11:02:55.691295   12277 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/ha-068000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/ha-068000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/ha-068000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:45:cc:23:8d:a1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/ha-068000/disk.qcow2
	I0314 11:02:55.693352   12277 main.go:141] libmachine: STDOUT: 
	I0314 11:02:55.693378   12277 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0314 11:02:55.693411   12277 fix.go:56] duration metric: took 14.366834ms for fixHost
	I0314 11:02:55.693416   12277 start.go:83] releasing machines lock for "ha-068000", held for 14.381833ms
	W0314 11:02:55.693422   12277 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0314 11:02:55.693450   12277 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0314 11:02:55.693455   12277 start.go:728] Will try again in 5 seconds ...
	I0314 11:03:00.695556   12277 start.go:360] acquireMachinesLock for ha-068000: {Name:mk7f342f84a1fcf2acaf2ebdc885e8ee2d848fd8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 11:03:00.695910   12277 start.go:364] duration metric: took 267.458µs to acquireMachinesLock for "ha-068000"
	I0314 11:03:00.696045   12277 start.go:96] Skipping create...Using existing machine configuration
	I0314 11:03:00.696066   12277 fix.go:54] fixHost starting: 
	I0314 11:03:00.696730   12277 fix.go:112] recreateIfNeeded on ha-068000: state=Stopped err=<nil>
	W0314 11:03:00.696762   12277 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 11:03:00.700536   12277 out.go:177] * Restarting existing qemu2 VM for "ha-068000" ...
	I0314 11:03:00.704485   12277 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/ha-068000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/ha-068000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/ha-068000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:45:cc:23:8d:a1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/ha-068000/disk.qcow2
	I0314 11:03:00.714334   12277 main.go:141] libmachine: STDOUT: 
	I0314 11:03:00.714415   12277 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0314 11:03:00.714490   12277 fix.go:56] duration metric: took 18.426209ms for fixHost
	I0314 11:03:00.714502   12277 start.go:83] releasing machines lock for "ha-068000", held for 18.57225ms
	W0314 11:03:00.714638   12277 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-068000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-068000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0314 11:03:00.720920   12277 out.go:177] 
	W0314 11:03:00.724497   12277 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0314 11:03:00.724521   12277 out.go:239] * 
	* 
	W0314 11:03:00.727203   12277 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0314 11:03:00.735362   12277 out.go:177] 

** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-068000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-068000 -n ha-068000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-068000 -n ha-068000: exit status 7 (71.417416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-068000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMutliControlPlane/serial/RestartCluster (5.26s)
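
The stderr above also shows the retry shape around start.go:713 and start.go:728: one failed fixHost, a fixed five-second wait, a single retry, then the GUEST_PROVISION exit (status 80 in this run). A compressed sketch of that control flow; the function name is illustrative, not minikube's:

	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	// startHost stands in for the driver start that fails while the
	// socket_vmnet daemon is unreachable.
	func startHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := startHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second)
			if err := startHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
				os.Exit(80)
			}
		}
	}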

TestMutliControlPlane/serial/DegradedAfterClusterRestart (0.11s)

=== RUN   TestMutliControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-068000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-068000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-068000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.4\",\"ClusterName\":\"ha-068000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-068000 -n ha-068000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-068000 -n ha-068000: exit status 7 (32.2705ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-068000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMutliControlPlane/serial/DegradedAfterClusterRestart (0.11s)

TestMutliControlPlane/serial/AddSecondaryNode (0.08s)

=== RUN   TestMutliControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-068000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-068000 --control-plane -v=7 --alsologtostderr: exit status 83 (44.663292ms)

-- stdout --
	* The control-plane node ha-068000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-068000"

-- /stdout --
** stderr ** 
	I0314 11:03:00.969548   12293 out.go:291] Setting OutFile to fd 1 ...
	I0314 11:03:00.969685   12293 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:03:00.969688   12293 out.go:304] Setting ErrFile to fd 2...
	I0314 11:03:00.969691   12293 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:03:00.969799   12293 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
	I0314 11:03:00.970011   12293 mustload.go:65] Loading cluster: ha-068000
	I0314 11:03:00.970181   12293 config.go:182] Loaded profile config "ha-068000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 11:03:00.974375   12293 out.go:177] * The control-plane node ha-068000 host is not running: state=Stopped
	I0314 11:03:00.978422   12293 out.go:177]   To start a cluster, run: "minikube start -p ha-068000"

** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-068000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-068000 -n ha-068000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-068000 -n ha-068000: exit status 7 (32.379542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-068000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMutliControlPlane/serial/AddSecondaryNode (0.08s)

TestMutliControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.11s)

=== RUN   TestMutliControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-068000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-068000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-068000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPo
rt\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.4\",\"ClusterName\":\"ha-068000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRu
ntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHA
gentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-068000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-068000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-068000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,
\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.4\",\"ClusterName\":\"ha-068000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4\",
\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\
":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-068000 -n ha-068000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-068000 -n ha-068000: exit status 7 (31.766208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-068000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMutliControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.11s)

TestImageBuild/serial/Setup (9.85s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-834000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-834000 --driver=qemu2 : exit status 80 (9.775756334s)

-- stdout --
	* [image-834000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18384
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-834000" primary control-plane node in "image-834000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-834000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-834000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-834000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-834000 -n image-834000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-834000 -n image-834000: exit status 7 (71.032292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-834000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (9.85s)
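
This failure takes the fresh-create path rather than a restart, but it stalls at the same point: the "executing:" lines earlier in the report show qemu-system-aarch64 being run through socket_vmnet_client, which must connect to /var/run/socket_vmnet before handing qemu the connected socket as fd 3 (hence -netdev socket,id=net0,fd=3). A sketch of that wrapper invocation, cut down to the networking-relevant flags:

	package main

	import (
		"os"
		"os/exec"
	)

	func main() {
		// socket_vmnet_client <socket path> <command ...>: it dials the unix
		// socket, then runs the command with the connection on fd 3. If the
		// dial fails, qemu never starts, which is why every attempt in this
		// report dies before a VM exists.
		cmd := exec.Command(
			"/opt/socket_vmnet/bin/socket_vmnet_client",
			"/var/run/socket_vmnet",
			"qemu-system-aarch64",
			"-M", "virt,highmem=off",
			"-netdev", "socket,id=net0,fd=3",
			// ...remaining flags as in the full command lines logged above.
		)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			os.Exit(1) // surfaces as the "exit status 1" wrapped in the driver errors
		}
	}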

TestJSONOutput/start/Command (9.96s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-620000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-620000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.957114125s)

-- stdout --
	{"specversion":"1.0","id":"b0fc9ae0-4faf-45ed-84f2-00ff31b3d1a4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-620000] minikube v1.32.0 on Darwin 14.3.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"656eff90-1809-4c38-927b-8495d9b02e90","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18384"}}
	{"specversion":"1.0","id":"fe10bcec-3796-4a19-bdc9-50857e133d4a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig"}}
	{"specversion":"1.0","id":"b311c5c3-f403-4579-8399-3ac63c6c1b80","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"9e6efa51-9735-47cd-8aa6-fd9791d7c4a1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f6dec8e6-5301-4dfa-8016-b292e949739b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube"}}
	{"specversion":"1.0","id":"b3f3cb04-5956-4ef2-9eb9-44469cc5494e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"7c31e987-5ddc-49dc-a10e-9b4a76113799","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"efbc100f-c88f-4886-a167-d08f88a346b2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"93a68753-15f9-47c2-bd79-cf84743217f7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-620000\" primary control-plane node in \"json-output-620000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"0744aff1-ad45-466e-8e5b-d41b0fc0ad52","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"33ef62e5-4ec8-4c40-bc35-29cd8b844b9f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-620000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"596523df-3c20-44a3-856d-d451e28acf84","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"1987d16d-ae25-4dae-becf-52916ab89683","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"23cdf933-0b57-494d-8d79-d34e46de600d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-620000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"a03c5511-7eaf-49d5-82b5-02abcae54f4c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"e28d0413-1559-40fe-b559-ca25bc24f6cb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-620000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.96s)
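
Note: the start failure and the parse failure are linked. With --output=json, every stdout line is expected to be a standalone CloudEvent JSON object, but the qemu2 driver's raw "OUTPUT:" and "ERROR:" lines are passed through verbatim, so decoding stops at the first non-JSON line with exactly the error logged above. A minimal sketch of that per-line check, reading candidate output on stdin (this is not the json_output_test.go source):

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
		"strings"
	)

	func main() {
		// Every non-empty line must decode as a JSON object (a CloudEvent).
		// A bare driver line such as "OUTPUT:" fails with the same error as
		// the log above: invalid character 'O' looking for beginning of value.
		sc := bufio.NewScanner(os.Stdin)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if line == "" {
				continue
			}
			var ev map[string]interface{}
			if err := json.Unmarshal([]byte(line), &ev); err != nil {
				fmt.Printf("converting to cloud events: %v\n", err)
				os.Exit(1)
			}
		}
	}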

TestJSONOutput/pause/Command (0.08s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-620000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-620000 --output=json --user=testUser: exit status 83 (81.642583ms)

-- stdout --
	{"specversion":"1.0","id":"95a7b6a4-773b-48ee-ae21-f3d347a5beef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-620000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"d571e007-6e25-4e32-9d1d-4454138daa68","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-620000\""}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-620000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

TestJSONOutput/unpause/Command (0.05s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-620000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-620000 --output=json --user=testUser: exit status 83 (46.26625ms)

-- stdout --
	* The control-plane node json-output-620000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-620000"

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-620000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-620000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.05s)

TestMinikubeProfile (10.31s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-827000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-827000 --driver=qemu2 : exit status 80 (9.858329542s)

-- stdout --
	* [first-827000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18384
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-827000" primary control-plane node in "first-827000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-827000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-827000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-827000 --driver=qemu2 ": exit status 80
panic.go:626: *** TestMinikubeProfile FAILED at 2024-03-14 11:03:34.999503 -0700 PDT m=+497.507134918
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-828000 -n second-828000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-828000 -n second-828000: exit status 85 (83.641709ms)

-- stdout --
	* Profile "second-828000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-828000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-828000" host is not running, skipping log retrieval (state="* Profile \"second-828000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-828000\"")
helpers_test.go:175: Cleaning up "second-828000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-828000
panic.go:626: *** TestMinikubeProfile FAILED at 2024-03-14 11:03:35.311427 -0700 PDT m=+497.819064876
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-827000 -n first-827000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-827000 -n first-827000: exit status 7 (32.299083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-827000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-827000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-827000
--- FAIL: TestMinikubeProfile (10.31s)
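
Note: four distinct exit codes recur through this report. Their meanings, as inferred from the messages in the surrounding logs (minikube's pkg/minikube/reason package holds the canonical definitions; this table is only what the report itself shows):

	package main

	import "fmt"

	// Meanings inferred from the log messages in this report, not taken
	// from minikube source.
	var observedExitCodes = map[int]string{
		7:  "status query on a stopped host (treated as may-be-ok)",
		80: "GUEST_PROVISION: creating the qemu2 VM failed",
		83: "command refused: control-plane host not running (state=Stopped)",
		85: "profile not found",
	}

	func main() {
		for code, meaning := range observedExitCodes {
			fmt.Printf("exit %d: %s\n", code, meaning)
		}
	}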

TestMountStart/serial/StartWithMountFirst (10.57s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-520000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-520000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (10.508432667s)

-- stdout --
	* [mount-start-1-520000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18384
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-520000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-520000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-520000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-520000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-520000 -n mount-start-1-520000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-520000 -n mount-start-1-520000: exit status 7 (58.93075ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-520000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.57s)

TestMultiNode/serial/FreshStart2Nodes (9.99s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-382000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-382000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.916595834s)

-- stdout --
	* [multinode-382000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18384
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-382000" primary control-plane node in "multinode-382000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-382000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0314 11:03:46.372862   12463 out.go:291] Setting OutFile to fd 1 ...
	I0314 11:03:46.372995   12463 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:03:46.372999   12463 out.go:304] Setting ErrFile to fd 2...
	I0314 11:03:46.373001   12463 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:03:46.373119   12463 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
	I0314 11:03:46.374212   12463 out.go:298] Setting JSON to false
	I0314 11:03:46.390310   12463 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7398,"bootTime":1710432028,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0314 11:03:46.390374   12463 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0314 11:03:46.396351   12463 out.go:177] * [multinode-382000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0314 11:03:46.403384   12463 out.go:177]   - MINIKUBE_LOCATION=18384
	I0314 11:03:46.407339   12463 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	I0314 11:03:46.403431   12463 notify.go:220] Checking for updates...
	I0314 11:03:46.413357   12463 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0314 11:03:46.416422   12463 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 11:03:46.419427   12463 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	I0314 11:03:46.422421   12463 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 11:03:46.425605   12463 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 11:03:46.429394   12463 out.go:177] * Using the qemu2 driver based on user configuration
	I0314 11:03:46.436367   12463 start.go:297] selected driver: qemu2
	I0314 11:03:46.436373   12463 start.go:901] validating driver "qemu2" against <nil>
	I0314 11:03:46.436381   12463 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 11:03:46.438738   12463 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0314 11:03:46.441326   12463 out.go:177] * Automatically selected the socket_vmnet network
	I0314 11:03:46.444435   12463 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 11:03:46.444476   12463 cni.go:84] Creating CNI manager for ""
	I0314 11:03:46.444480   12463 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0314 11:03:46.444489   12463 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0314 11:03:46.444526   12463 start.go:340] cluster config:
	{Name:multinode-382000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-382000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 11:03:46.448858   12463 iso.go:125] acquiring lock: {Name:mkb282eb7d87bd36e7a71d78648e2dadb7f0a89e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 11:03:46.456397   12463 out.go:177] * Starting "multinode-382000" primary control-plane node in "multinode-382000" cluster
	I0314 11:03:46.460255   12463 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0314 11:03:46.460271   12463 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0314 11:03:46.460289   12463 cache.go:56] Caching tarball of preloaded images
	I0314 11:03:46.460347   12463 preload.go:173] Found /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0314 11:03:46.460353   12463 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0314 11:03:46.460585   12463 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/multinode-382000/config.json ...
	I0314 11:03:46.460598   12463 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/multinode-382000/config.json: {Name:mk967454f626925436a18bf8edc2597c2b6e75c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 11:03:46.460811   12463 start.go:360] acquireMachinesLock for multinode-382000: {Name:mk7f342f84a1fcf2acaf2ebdc885e8ee2d848fd8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 11:03:46.460844   12463 start.go:364] duration metric: took 26.834µs to acquireMachinesLock for "multinode-382000"
	I0314 11:03:46.460857   12463 start.go:93] Provisioning new machine with config: &{Name:multinode-382000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-382000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0314 11:03:46.460882   12463 start.go:125] createHost starting for "" (driver="qemu2")
	I0314 11:03:46.469173   12463 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0314 11:03:46.486904   12463 start.go:159] libmachine.API.Create for "multinode-382000" (driver="qemu2")
	I0314 11:03:46.486935   12463 client.go:168] LocalClient.Create starting
	I0314 11:03:46.486996   12463 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/ca.pem
	I0314 11:03:46.487026   12463 main.go:141] libmachine: Decoding PEM data...
	I0314 11:03:46.487036   12463 main.go:141] libmachine: Parsing certificate...
	I0314 11:03:46.487089   12463 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/cert.pem
	I0314 11:03:46.487111   12463 main.go:141] libmachine: Decoding PEM data...
	I0314 11:03:46.487119   12463 main.go:141] libmachine: Parsing certificate...
	I0314 11:03:46.487556   12463 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18384-10823/.minikube/cache/iso/arm64/minikube-v1.32.1-1710348681-18375-arm64.iso...
	I0314 11:03:46.631908   12463 main.go:141] libmachine: Creating SSH key...
	I0314 11:03:46.857711   12463 main.go:141] libmachine: Creating Disk image...
	I0314 11:03:46.857724   12463 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0314 11:03:46.857954   12463 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/multinode-382000/disk.qcow2.raw /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/multinode-382000/disk.qcow2
	I0314 11:03:46.870507   12463 main.go:141] libmachine: STDOUT: 
	I0314 11:03:46.870528   12463 main.go:141] libmachine: STDERR: 
	I0314 11:03:46.870585   12463 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/multinode-382000/disk.qcow2 +20000M
	I0314 11:03:46.881069   12463 main.go:141] libmachine: STDOUT: Image resized.
	
	I0314 11:03:46.881082   12463 main.go:141] libmachine: STDERR: 
	I0314 11:03:46.881094   12463 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/multinode-382000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/multinode-382000/disk.qcow2
	I0314 11:03:46.881099   12463 main.go:141] libmachine: Starting QEMU VM...
	I0314 11:03:46.881130   12463 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/multinode-382000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/multinode-382000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/multinode-382000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:4f:3b:80:d0:2b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/multinode-382000/disk.qcow2
	I0314 11:03:46.882832   12463 main.go:141] libmachine: STDOUT: 
	I0314 11:03:46.882853   12463 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0314 11:03:46.882871   12463 client.go:171] duration metric: took 395.938792ms to LocalClient.Create
	I0314 11:03:48.884983   12463 start.go:128] duration metric: took 2.424124958s to createHost
	I0314 11:03:48.885018   12463 start.go:83] releasing machines lock for "multinode-382000", held for 2.424210292s
	W0314 11:03:48.885057   12463 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0314 11:03:48.894532   12463 out.go:177] * Deleting "multinode-382000" in qemu2 ...
	W0314 11:03:48.921411   12463 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0314 11:03:48.921463   12463 start.go:728] Will try again in 5 seconds ...
	I0314 11:03:53.922909   12463 start.go:360] acquireMachinesLock for multinode-382000: {Name:mk7f342f84a1fcf2acaf2ebdc885e8ee2d848fd8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 11:03:53.923245   12463 start.go:364] duration metric: took 263.125µs to acquireMachinesLock for "multinode-382000"
	I0314 11:03:53.923377   12463 start.go:93] Provisioning new machine with config: &{Name:multinode-382000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-382000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0314 11:03:53.923592   12463 start.go:125] createHost starting for "" (driver="qemu2")
	I0314 11:03:53.932145   12463 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0314 11:03:53.980631   12463 start.go:159] libmachine.API.Create for "multinode-382000" (driver="qemu2")
	I0314 11:03:53.980679   12463 client.go:168] LocalClient.Create starting
	I0314 11:03:53.980780   12463 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/ca.pem
	I0314 11:03:53.980839   12463 main.go:141] libmachine: Decoding PEM data...
	I0314 11:03:53.980853   12463 main.go:141] libmachine: Parsing certificate...
	I0314 11:03:53.980918   12463 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/cert.pem
	I0314 11:03:53.980959   12463 main.go:141] libmachine: Decoding PEM data...
	I0314 11:03:53.980973   12463 main.go:141] libmachine: Parsing certificate...
	I0314 11:03:53.981468   12463 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18384-10823/.minikube/cache/iso/arm64/minikube-v1.32.1-1710348681-18375-arm64.iso...
	I0314 11:03:54.138541   12463 main.go:141] libmachine: Creating SSH key...
	I0314 11:03:54.186943   12463 main.go:141] libmachine: Creating Disk image...
	I0314 11:03:54.186949   12463 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0314 11:03:54.187192   12463 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/multinode-382000/disk.qcow2.raw /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/multinode-382000/disk.qcow2
	I0314 11:03:54.199355   12463 main.go:141] libmachine: STDOUT: 
	I0314 11:03:54.199374   12463 main.go:141] libmachine: STDERR: 
	I0314 11:03:54.199428   12463 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/multinode-382000/disk.qcow2 +20000M
	I0314 11:03:54.209862   12463 main.go:141] libmachine: STDOUT: Image resized.
	
	I0314 11:03:54.209878   12463 main.go:141] libmachine: STDERR: 
	I0314 11:03:54.209895   12463 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/multinode-382000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/multinode-382000/disk.qcow2
	I0314 11:03:54.209900   12463 main.go:141] libmachine: Starting QEMU VM...
	I0314 11:03:54.209932   12463 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/multinode-382000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/multinode-382000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/multinode-382000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:77:37:b2:20:06 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/multinode-382000/disk.qcow2
	I0314 11:03:54.211544   12463 main.go:141] libmachine: STDOUT: 
	I0314 11:03:54.211560   12463 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0314 11:03:54.211580   12463 client.go:171] duration metric: took 230.890708ms to LocalClient.Create
	I0314 11:03:56.213727   12463 start.go:128] duration metric: took 2.290153583s to createHost
	I0314 11:03:56.213777   12463 start.go:83] releasing machines lock for "multinode-382000", held for 2.290529417s
	W0314 11:03:56.214208   12463 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-382000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-382000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0314 11:03:56.228861   12463 out.go:177] 
	W0314 11:03:56.232748   12463 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0314 11:03:56.232774   12463 out.go:239] * 
	* 
	W0314 11:03:56.235702   12463 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0314 11:03:56.243752   12463 out.go:177] 

** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-382000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-382000 -n multinode-382000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-382000 -n multinode-382000: exit status 7 (69.494959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-382000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.99s)
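
Note: the verbose trace above isolates the shared root cause of this run. libmachine builds the disk image without trouble (qemu-img convert and resize both succeed), but launching QEMU goes through /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach the socket_vmnet daemon's unix socket at /var/run/socket_vmnet, so every createHost attempt dies with "Connection refused" before the VM boots. A hypothetical standalone reachability check, not part of the suite; the socket path is taken from the log:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Dial the same unix socket socket_vmnet_client opens for QEMU.
		// If the socket_vmnet daemon is down, this reproduces the
		// "Connection refused" behind every StartHost failure above.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}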

TestMultiNode/serial/DeployApp2Nodes (66.85s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-382000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-382000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (60.60275ms)

** stderr ** 
	error: cluster "multinode-382000" does not exist

** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-382000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-382000 -- rollout status deployment/busybox: exit status 1 (58.092375ms)

** stderr ** 
	error: no server found for cluster "multinode-382000"

** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-382000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-382000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (58.730042ms)

** stderr ** 
	error: no server found for cluster "multinode-382000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-382000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-382000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.535709ms)

** stderr ** 
	error: no server found for cluster "multinode-382000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-382000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-382000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.384875ms)

** stderr ** 
	error: no server found for cluster "multinode-382000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-382000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-382000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.290333ms)

** stderr ** 
	error: no server found for cluster "multinode-382000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-382000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-382000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.866167ms)

** stderr ** 
	error: no server found for cluster "multinode-382000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-382000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-382000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.58225ms)

** stderr ** 
	error: no server found for cluster "multinode-382000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-382000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-382000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.856292ms)

** stderr ** 
	error: no server found for cluster "multinode-382000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-382000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-382000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.081583ms)

** stderr ** 
	error: no server found for cluster "multinode-382000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-382000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-382000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.45325ms)

** stderr ** 
	error: no server found for cluster "multinode-382000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-382000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-382000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.909625ms)

** stderr ** 
	error: no server found for cluster "multinode-382000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-382000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-382000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (59.134459ms)

** stderr ** 
	error: no server found for cluster "multinode-382000"

** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-382000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-382000 -- exec  -- nslookup kubernetes.io: exit status 1 (58.495042ms)

** stderr ** 
	error: no server found for cluster "multinode-382000"

** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-382000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-382000 -- exec  -- nslookup kubernetes.default: exit status 1 (59.344125ms)

** stderr ** 
	error: no server found for cluster "multinode-382000"

** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-382000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-382000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (57.916875ms)

** stderr ** 
	error: no server found for cluster "multinode-382000"

** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-382000 -n multinode-382000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-382000 -n multinode-382000: exit status 7 (32.742333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-382000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (66.85s)
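
Note: the ten identical "no server found" errors above are the test's polling loop: it re-runs the pod-IP query and only gives up once the retries are exhausted. A rough sketch of that pattern, with the binary path, profile name, and retry count taken from the log; the sleep interval is an assumption, not the test's actual backoff:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// podIPs runs the same query the log shows, via the minikube-wrapped kubectl.
	func podIPs() (string, error) {
		out, err := exec.Command("out/minikube-darwin-arm64", "kubectl",
			"-p", "multinode-382000", "--",
			"get", "pods", "-o", "jsonpath={.items[*].status.podIP}").Output()
		return string(out), err
	}

	func main() {
		for attempt := 1; attempt <= 10; attempt++ {
			if ips, err := podIPs(); err == nil && ips != "" {
				fmt.Println("pod IPs:", ips)
				return
			}
			time.Sleep(time.Duration(attempt) * time.Second) // assumed backoff
		}
		fmt.Println("failed to resolve pod IPs after retries")
	}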

TestMultiNode/serial/PingHostFrom2Pods (0.09s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-382000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-382000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (58.246334ms)

** stderr ** 
	error: no server found for cluster "multinode-382000"

** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-382000 -n multinode-382000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-382000 -n multinode-382000: exit status 7 (33.183042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-382000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

                                                
                                    
TestMultiNode/serial/AddNode (0.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-382000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-382000 -v 3 --alsologtostderr: exit status 83 (46.733584ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-382000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-382000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0314 11:05:03.301587   12546 out.go:291] Setting OutFile to fd 1 ...
	I0314 11:05:03.301922   12546 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:05:03.301925   12546 out.go:304] Setting ErrFile to fd 2...
	I0314 11:05:03.301928   12546 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:05:03.302061   12546 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
	I0314 11:05:03.302260   12546 mustload.go:65] Loading cluster: multinode-382000
	I0314 11:05:03.302461   12546 config.go:182] Loaded profile config "multinode-382000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 11:05:03.307054   12546 out.go:177] * The control-plane node multinode-382000 host is not running: state=Stopped
	I0314 11:05:03.310987   12546 out.go:177]   To start a cluster, run: "minikube start -p multinode-382000"

                                                
                                                
** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-382000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-382000 -n multinode-382000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-382000 -n multinode-382000: exit status 7 (32.084625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-382000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.08s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-382000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-382000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.426917ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: multinode-382000

                                                
                                                
** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-382000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-382000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-382000 -n multinode-382000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-382000 -n multinode-382000: exit status 7 (32.607875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-382000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)
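
Two failures stack here: kubectl exits 1 because the multinode-382000 context is missing from the kubeconfig, and the test then feeds kubectl's empty stdout to a JSON decoder, which reports "unexpected end of JSON input". The second error is ordinary encoding/json behavior on empty input, as this self-contained sketch shows:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // kubectl wrote nothing to stdout, so the label parser
        // receives an empty byte slice.
        var labels []map[string]string
        err := json.Unmarshal([]byte(""), &labels)
        fmt.Println(err) // unexpected end of JSON input
    }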

                                                
                                    
TestMultiNode/serial/ProfileList (0.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-382000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-382000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-382000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.4\",\"ClusterName\":\"multinode-382000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-382000 -n multinode-382000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-382000 -n multinode-382000: exit status 7 (32.669875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-382000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.11s)
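
The assertion is a node count: the profile's Config.Nodes array holds only the single control-plane entry, while a three-node cluster was expected (the worker adds never succeeded). A trimmed sketch of the same check against a cut-down version of the payload above; the struct is hypothetical and mirrors only the fields needed:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // profileList mirrors just enough of `minikube profile list --output json`.
    type profileList struct {
        Valid []struct {
            Name   string
            Config struct {
                Nodes []struct {
                    Name         string
                    ControlPlane bool
                    Worker       bool
                }
            }
        } `json:"valid"`
    }

    func main() {
        raw := []byte(`{"valid":[{"Name":"multinode-382000","Config":{"Nodes":[{"Name":"","ControlPlane":true,"Worker":true}]}}]}`)
        var pl profileList
        if err := json.Unmarshal(raw, &pl); err != nil {
            panic(err)
        }
        // The failing comparison: 3 expected, 1 present.
        fmt.Println("nodes:", len(pl.Valid[0].Config.Nodes))
    }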

                                                
                                    
TestMultiNode/serial/CopyFile (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-382000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-382000 status --output json --alsologtostderr: exit status 7 (32.707209ms)

                                                
                                                
-- stdout --
	{"Name":"multinode-382000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I0314 11:05:03.546998   12559 out.go:291] Setting OutFile to fd 1 ...
	I0314 11:05:03.547142   12559 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:05:03.547145   12559 out.go:304] Setting ErrFile to fd 2...
	I0314 11:05:03.547148   12559 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:05:03.547266   12559 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
	I0314 11:05:03.547372   12559 out.go:298] Setting JSON to true
	I0314 11:05:03.547382   12559 mustload.go:65] Loading cluster: multinode-382000
	I0314 11:05:03.547444   12559 notify.go:220] Checking for updates...
	I0314 11:05:03.547564   12559 config.go:182] Loaded profile config "multinode-382000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 11:05:03.547569   12559 status.go:255] checking status of multinode-382000 ...
	I0314 11:05:03.547755   12559 status.go:330] multinode-382000 host status = "Stopped" (err=<nil>)
	I0314 11:05:03.547759   12559 status.go:343] host is not running, skipping remaining checks
	I0314 11:05:03.547761   12559 status.go:257] multinode-382000 status: &{Name:multinode-382000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-382000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-382000 -n multinode-382000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-382000 -n multinode-382000: exit status 7 (33.139875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-382000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.07s)
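
The decode failure is a shape mismatch rather than corrupt output: with one node, minikube prints a single JSON object, while the test unmarshals into a slice ([]cmd.Status). A small sketch that reproduces the analogous encoding/json error with a stand-in Status type mirroring the fields in the log:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Status is a stand-in for minikube's cmd.Status, with the
    // fields visible in the stdout above.
    type Status struct {
        Name       string
        Host       string
        Kubelet    string
        APIServer  string
        Kubeconfig string
        Worker     bool
    }

    func main() {
        raw := []byte(`{"Name":"multinode-382000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)

        var many []Status
        if err := json.Unmarshal(raw, &many); err != nil {
            // json: cannot unmarshal object into Go value of type []main.Status
            fmt.Println("as slice:", err)
        }

        var one Status
        if err := json.Unmarshal(raw, &one); err == nil {
            fmt.Println("as object:", one.Host) // Stopped
        }
    }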

                                                
                                    
TestMultiNode/serial/StopNode (0.15s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-382000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-382000 node stop m03: exit status 85 (50.783125ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-382000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-382000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-382000 status: exit status 7 (33.313334ms)

                                                
                                                
-- stdout --
	multinode-382000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-382000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-382000 status --alsologtostderr: exit status 7 (33.108083ms)

                                                
                                                
-- stdout --
	multinode-382000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0314 11:05:03.700296   12567 out.go:291] Setting OutFile to fd 1 ...
	I0314 11:05:03.700479   12567 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:05:03.700482   12567 out.go:304] Setting ErrFile to fd 2...
	I0314 11:05:03.700484   12567 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:05:03.700614   12567 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
	I0314 11:05:03.700763   12567 out.go:298] Setting JSON to false
	I0314 11:05:03.700783   12567 mustload.go:65] Loading cluster: multinode-382000
	I0314 11:05:03.700849   12567 notify.go:220] Checking for updates...
	I0314 11:05:03.700968   12567 config.go:182] Loaded profile config "multinode-382000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 11:05:03.700972   12567 status.go:255] checking status of multinode-382000 ...
	I0314 11:05:03.701178   12567 status.go:330] multinode-382000 host status = "Stopped" (err=<nil>)
	I0314 11:05:03.701181   12567 status.go:343] host is not running, skipping remaining checks
	I0314 11:05:03.701183   12567 status.go:257] multinode-382000 status: &{Name:multinode-382000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-382000 status --alsologtostderr": multinode-382000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-382000 -n multinode-382000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-382000 -n multinode-382000: exit status 7 (32.547125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-382000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.15s)
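
node stop m03 exits 85 with GUEST_NODE_RETRIEVE because the third node was never created (AddNode failed earlier), so there is no m03 to stop. As these logs reflect, minikube names secondary nodes with m02, m03, ... suffixes under the profile; a tiny illustration of that assumed convention:

    package main

    import "fmt"

    func main() {
        profile := "multinode-382000"
        // The first node keeps the profile name; additional nodes
        // would have been multinode-382000-m02 and -m03.
        for i := 2; i <= 3; i++ {
            fmt.Printf("%s-m%02d\n", profile, i)
        }
    }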

                                                
                                    
TestMultiNode/serial/StartAfterStop (54.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-382000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-382000 node start m03 -v=7 --alsologtostderr: exit status 85 (49.842125ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0314 11:05:03.766028   12571 out.go:291] Setting OutFile to fd 1 ...
	I0314 11:05:03.766361   12571 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:05:03.766364   12571 out.go:304] Setting ErrFile to fd 2...
	I0314 11:05:03.766367   12571 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:05:03.766490   12571 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
	I0314 11:05:03.766683   12571 mustload.go:65] Loading cluster: multinode-382000
	I0314 11:05:03.766876   12571 config.go:182] Loaded profile config "multinode-382000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 11:05:03.770999   12571 out.go:177] 
	W0314 11:05:03.774975   12571 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0314 11:05:03.774979   12571 out.go:239] * 
	* 
	W0314 11:05:03.777066   12571 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0314 11:05:03.780990   12571 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:284: I0314 11:05:03.766028   12571 out.go:291] Setting OutFile to fd 1 ...
I0314 11:05:03.766361   12571 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0314 11:05:03.766364   12571 out.go:304] Setting ErrFile to fd 2...
I0314 11:05:03.766367   12571 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0314 11:05:03.766490   12571 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
I0314 11:05:03.766683   12571 mustload.go:65] Loading cluster: multinode-382000
I0314 11:05:03.766876   12571 config.go:182] Loaded profile config "multinode-382000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0314 11:05:03.770999   12571 out.go:177] 
W0314 11:05:03.774975   12571 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0314 11:05:03.774979   12571 out.go:239] * 
* 
W0314 11:05:03.777066   12571 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0314 11:05:03.780990   12571 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-382000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-382000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-382000 status -v=7 --alsologtostderr: exit status 7 (32.725625ms)

                                                
                                                
-- stdout --
	multinode-382000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0314 11:05:03.815982   12573 out.go:291] Setting OutFile to fd 1 ...
	I0314 11:05:03.816111   12573 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:05:03.816115   12573 out.go:304] Setting ErrFile to fd 2...
	I0314 11:05:03.816117   12573 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:05:03.816245   12573 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
	I0314 11:05:03.816368   12573 out.go:298] Setting JSON to false
	I0314 11:05:03.816377   12573 mustload.go:65] Loading cluster: multinode-382000
	I0314 11:05:03.816440   12573 notify.go:220] Checking for updates...
	I0314 11:05:03.816569   12573 config.go:182] Loaded profile config "multinode-382000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 11:05:03.816574   12573 status.go:255] checking status of multinode-382000 ...
	I0314 11:05:03.816766   12573 status.go:330] multinode-382000 host status = "Stopped" (err=<nil>)
	I0314 11:05:03.816770   12573 status.go:343] host is not running, skipping remaining checks
	I0314 11:05:03.816772   12573 status.go:257] multinode-382000 status: &{Name:multinode-382000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-382000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-382000 status -v=7 --alsologtostderr: exit status 7 (76.434459ms)

                                                
                                                
-- stdout --
	multinode-382000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0314 11:05:04.481826   12575 out.go:291] Setting OutFile to fd 1 ...
	I0314 11:05:04.482025   12575 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:05:04.482029   12575 out.go:304] Setting ErrFile to fd 2...
	I0314 11:05:04.482032   12575 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:05:04.482213   12575 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
	I0314 11:05:04.482387   12575 out.go:298] Setting JSON to false
	I0314 11:05:04.482398   12575 mustload.go:65] Loading cluster: multinode-382000
	I0314 11:05:04.482480   12575 notify.go:220] Checking for updates...
	I0314 11:05:04.482669   12575 config.go:182] Loaded profile config "multinode-382000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 11:05:04.482676   12575 status.go:255] checking status of multinode-382000 ...
	I0314 11:05:04.482936   12575 status.go:330] multinode-382000 host status = "Stopped" (err=<nil>)
	I0314 11:05:04.482940   12575 status.go:343] host is not running, skipping remaining checks
	I0314 11:05:04.482944   12575 status.go:257] multinode-382000 status: &{Name:multinode-382000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-382000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-382000 status -v=7 --alsologtostderr: exit status 7 (74.85775ms)

                                                
                                                
-- stdout --
	multinode-382000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0314 11:05:05.844065   12577 out.go:291] Setting OutFile to fd 1 ...
	I0314 11:05:05.844252   12577 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:05:05.844256   12577 out.go:304] Setting ErrFile to fd 2...
	I0314 11:05:05.844260   12577 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:05:05.844411   12577 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
	I0314 11:05:05.844582   12577 out.go:298] Setting JSON to false
	I0314 11:05:05.844594   12577 mustload.go:65] Loading cluster: multinode-382000
	I0314 11:05:05.844623   12577 notify.go:220] Checking for updates...
	I0314 11:05:05.844854   12577 config.go:182] Loaded profile config "multinode-382000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 11:05:05.844863   12577 status.go:255] checking status of multinode-382000 ...
	I0314 11:05:05.845126   12577 status.go:330] multinode-382000 host status = "Stopped" (err=<nil>)
	I0314 11:05:05.845131   12577 status.go:343] host is not running, skipping remaining checks
	I0314 11:05:05.845134   12577 status.go:257] multinode-382000 status: &{Name:multinode-382000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-382000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-382000 status -v=7 --alsologtostderr: exit status 7 (76.420583ms)

                                                
                                                
-- stdout --
	multinode-382000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0314 11:05:07.671202   12579 out.go:291] Setting OutFile to fd 1 ...
	I0314 11:05:07.671396   12579 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:05:07.671401   12579 out.go:304] Setting ErrFile to fd 2...
	I0314 11:05:07.671404   12579 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:05:07.671598   12579 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
	I0314 11:05:07.671759   12579 out.go:298] Setting JSON to false
	I0314 11:05:07.671770   12579 mustload.go:65] Loading cluster: multinode-382000
	I0314 11:05:07.671807   12579 notify.go:220] Checking for updates...
	I0314 11:05:07.672031   12579 config.go:182] Loaded profile config "multinode-382000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 11:05:07.672037   12579 status.go:255] checking status of multinode-382000 ...
	I0314 11:05:07.672304   12579 status.go:330] multinode-382000 host status = "Stopped" (err=<nil>)
	I0314 11:05:07.672309   12579 status.go:343] host is not running, skipping remaining checks
	I0314 11:05:07.672312   12579 status.go:257] multinode-382000 status: &{Name:multinode-382000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-382000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-382000 status -v=7 --alsologtostderr: exit status 7 (76.346416ms)

                                                
                                                
-- stdout --
	multinode-382000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0314 11:05:09.517636   12581 out.go:291] Setting OutFile to fd 1 ...
	I0314 11:05:09.517799   12581 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:05:09.517803   12581 out.go:304] Setting ErrFile to fd 2...
	I0314 11:05:09.517806   12581 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:05:09.517958   12581 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
	I0314 11:05:09.518115   12581 out.go:298] Setting JSON to false
	I0314 11:05:09.518127   12581 mustload.go:65] Loading cluster: multinode-382000
	I0314 11:05:09.518162   12581 notify.go:220] Checking for updates...
	I0314 11:05:09.518370   12581 config.go:182] Loaded profile config "multinode-382000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 11:05:09.518378   12581 status.go:255] checking status of multinode-382000 ...
	I0314 11:05:09.518646   12581 status.go:330] multinode-382000 host status = "Stopped" (err=<nil>)
	I0314 11:05:09.518650   12581 status.go:343] host is not running, skipping remaining checks
	I0314 11:05:09.518653   12581 status.go:257] multinode-382000 status: &{Name:multinode-382000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-382000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-382000 status -v=7 --alsologtostderr: exit status 7 (76.600125ms)

                                                
                                                
-- stdout --
	multinode-382000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0314 11:05:12.336236   12583 out.go:291] Setting OutFile to fd 1 ...
	I0314 11:05:12.336389   12583 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:05:12.336393   12583 out.go:304] Setting ErrFile to fd 2...
	I0314 11:05:12.336396   12583 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:05:12.336538   12583 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
	I0314 11:05:12.336686   12583 out.go:298] Setting JSON to false
	I0314 11:05:12.336696   12583 mustload.go:65] Loading cluster: multinode-382000
	I0314 11:05:12.336721   12583 notify.go:220] Checking for updates...
	I0314 11:05:12.336920   12583 config.go:182] Loaded profile config "multinode-382000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 11:05:12.336927   12583 status.go:255] checking status of multinode-382000 ...
	I0314 11:05:12.337231   12583 status.go:330] multinode-382000 host status = "Stopped" (err=<nil>)
	I0314 11:05:12.337236   12583 status.go:343] host is not running, skipping remaining checks
	I0314 11:05:12.337239   12583 status.go:257] multinode-382000 status: &{Name:multinode-382000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-382000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-382000 status -v=7 --alsologtostderr: exit status 7 (76.975708ms)

                                                
                                                
-- stdout --
	multinode-382000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0314 11:05:17.110694   12585 out.go:291] Setting OutFile to fd 1 ...
	I0314 11:05:17.110871   12585 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:05:17.110875   12585 out.go:304] Setting ErrFile to fd 2...
	I0314 11:05:17.110878   12585 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:05:17.111043   12585 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
	I0314 11:05:17.111215   12585 out.go:298] Setting JSON to false
	I0314 11:05:17.111227   12585 mustload.go:65] Loading cluster: multinode-382000
	I0314 11:05:17.111265   12585 notify.go:220] Checking for updates...
	I0314 11:05:17.111534   12585 config.go:182] Loaded profile config "multinode-382000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 11:05:17.111543   12585 status.go:255] checking status of multinode-382000 ...
	I0314 11:05:17.111824   12585 status.go:330] multinode-382000 host status = "Stopped" (err=<nil>)
	I0314 11:05:17.111829   12585 status.go:343] host is not running, skipping remaining checks
	I0314 11:05:17.111831   12585 status.go:257] multinode-382000 status: &{Name:multinode-382000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-382000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-382000 status -v=7 --alsologtostderr: exit status 7 (74.730833ms)

                                                
                                                
-- stdout --
	multinode-382000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0314 11:05:28.666105   12591 out.go:291] Setting OutFile to fd 1 ...
	I0314 11:05:28.666270   12591 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:05:28.666274   12591 out.go:304] Setting ErrFile to fd 2...
	I0314 11:05:28.666277   12591 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:05:28.666430   12591 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
	I0314 11:05:28.666597   12591 out.go:298] Setting JSON to false
	I0314 11:05:28.666610   12591 mustload.go:65] Loading cluster: multinode-382000
	I0314 11:05:28.666648   12591 notify.go:220] Checking for updates...
	I0314 11:05:28.666872   12591 config.go:182] Loaded profile config "multinode-382000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 11:05:28.666879   12591 status.go:255] checking status of multinode-382000 ...
	I0314 11:05:28.667179   12591 status.go:330] multinode-382000 host status = "Stopped" (err=<nil>)
	I0314 11:05:28.667184   12591 status.go:343] host is not running, skipping remaining checks
	I0314 11:05:28.667187   12591 status.go:257] multinode-382000 status: &{Name:multinode-382000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-382000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-382000 status -v=7 --alsologtostderr: exit status 7 (78.467625ms)

                                                
                                                
-- stdout --
	multinode-382000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0314 11:05:42.273798   12595 out.go:291] Setting OutFile to fd 1 ...
	I0314 11:05:42.273964   12595 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:05:42.273969   12595 out.go:304] Setting ErrFile to fd 2...
	I0314 11:05:42.273972   12595 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:05:42.274131   12595 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
	I0314 11:05:42.274277   12595 out.go:298] Setting JSON to false
	I0314 11:05:42.274289   12595 mustload.go:65] Loading cluster: multinode-382000
	I0314 11:05:42.274324   12595 notify.go:220] Checking for updates...
	I0314 11:05:42.274562   12595 config.go:182] Loaded profile config "multinode-382000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 11:05:42.274568   12595 status.go:255] checking status of multinode-382000 ...
	I0314 11:05:42.274897   12595 status.go:330] multinode-382000 host status = "Stopped" (err=<nil>)
	I0314 11:05:42.274902   12595 status.go:343] host is not running, skipping remaining checks
	I0314 11:05:42.274905   12595 status.go:257] multinode-382000 status: &{Name:multinode-382000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-382000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-382000 status -v=7 --alsologtostderr: exit status 7 (76.454917ms)

                                                
                                                
-- stdout --
	multinode-382000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0314 11:05:57.924679   12599 out.go:291] Setting OutFile to fd 1 ...
	I0314 11:05:57.924869   12599 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:05:57.924873   12599 out.go:304] Setting ErrFile to fd 2...
	I0314 11:05:57.924877   12599 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:05:57.925052   12599 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
	I0314 11:05:57.925214   12599 out.go:298] Setting JSON to false
	I0314 11:05:57.925225   12599 mustload.go:65] Loading cluster: multinode-382000
	I0314 11:05:57.925275   12599 notify.go:220] Checking for updates...
	I0314 11:05:57.925508   12599 config.go:182] Loaded profile config "multinode-382000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 11:05:57.925515   12599 status.go:255] checking status of multinode-382000 ...
	I0314 11:05:57.925784   12599 status.go:330] multinode-382000 host status = "Stopped" (err=<nil>)
	I0314 11:05:57.925789   12599 status.go:343] host is not running, skipping remaining checks
	I0314 11:05:57.925792   12599 status.go:257] multinode-382000 status: &{Name:multinode-382000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-382000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-382000 -n multinode-382000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-382000 -n multinode-382000: exit status 7 (35.52ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-382000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (54.23s)
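
The timestamps on the repeated status calls (11:05:03 through 11:05:57) indicate a retry loop with growing intervals until the test's time budget runs out. A minimal sketch of such a poll, assuming the binary path and profile from the log; this is not the suite's actual retry helper:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(54 * time.Second)
        wait := 500 * time.Millisecond
        for time.Now().Before(deadline) {
            out, err := exec.Command("out/minikube-darwin-arm64",
                "-p", "multinode-382000", "status").CombinedOutput()
            if err == nil { // exit status 0: every component is Running
                fmt.Printf("ready:\n%s", out)
                return
            }
            time.Sleep(wait)
            wait *= 2 // back off between attempts
        }
        fmt.Println("timed out waiting for a running status")
    }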

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (9.03s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-382000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-382000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-382000: (3.648757083s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-382000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-382000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.236920084s)

                                                
                                                
-- stdout --
	* [multinode-382000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18384
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-382000" primary control-plane node in "multinode-382000" cluster
	* Restarting existing qemu2 VM for "multinode-382000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-382000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0314 11:06:01.712111   12623 out.go:291] Setting OutFile to fd 1 ...
	I0314 11:06:01.712275   12623 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:06:01.712279   12623 out.go:304] Setting ErrFile to fd 2...
	I0314 11:06:01.712282   12623 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:06:01.712449   12623 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
	I0314 11:06:01.713620   12623 out.go:298] Setting JSON to false
	I0314 11:06:01.732441   12623 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7533,"bootTime":1710432028,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0314 11:06:01.732509   12623 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0314 11:06:01.736621   12623 out.go:177] * [multinode-382000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0314 11:06:01.744599   12623 out.go:177]   - MINIKUBE_LOCATION=18384
	I0314 11:06:01.748532   12623 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	I0314 11:06:01.744633   12623 notify.go:220] Checking for updates...
	I0314 11:06:01.754537   12623 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0314 11:06:01.757541   12623 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 11:06:01.760632   12623 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	I0314 11:06:01.763526   12623 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 11:06:01.766905   12623 config.go:182] Loaded profile config "multinode-382000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 11:06:01.766959   12623 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 11:06:01.771588   12623 out.go:177] * Using the qemu2 driver based on existing profile
	I0314 11:06:01.778526   12623 start.go:297] selected driver: qemu2
	I0314 11:06:01.778533   12623 start.go:901] validating driver "qemu2" against &{Name:multinode-382000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-382000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 11:06:01.778589   12623 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 11:06:01.781167   12623 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 11:06:01.781216   12623 cni.go:84] Creating CNI manager for ""
	I0314 11:06:01.781223   12623 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0314 11:06:01.781270   12623 start.go:340] cluster config:
	{Name:multinode-382000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-382000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 11:06:01.785951   12623 iso.go:125] acquiring lock: {Name:mkb282eb7d87bd36e7a71d78648e2dadb7f0a89e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 11:06:01.793512   12623 out.go:177] * Starting "multinode-382000" primary control-plane node in "multinode-382000" cluster
	I0314 11:06:01.797548   12623 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0314 11:06:01.797564   12623 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0314 11:06:01.797578   12623 cache.go:56] Caching tarball of preloaded images
	I0314 11:06:01.797647   12623 preload.go:173] Found /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0314 11:06:01.797654   12623 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0314 11:06:01.797733   12623 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/multinode-382000/config.json ...
	I0314 11:06:01.798174   12623 start.go:360] acquireMachinesLock for multinode-382000: {Name:mk7f342f84a1fcf2acaf2ebdc885e8ee2d848fd8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 11:06:01.798207   12623 start.go:364] duration metric: took 27.541µs to acquireMachinesLock for "multinode-382000"
	I0314 11:06:01.798221   12623 start.go:96] Skipping create...Using existing machine configuration
	I0314 11:06:01.798227   12623 fix.go:54] fixHost starting: 
	I0314 11:06:01.798350   12623 fix.go:112] recreateIfNeeded on multinode-382000: state=Stopped err=<nil>
	W0314 11:06:01.798359   12623 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 11:06:01.806566   12623 out.go:177] * Restarting existing qemu2 VM for "multinode-382000" ...
	I0314 11:06:01.814486   12623 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/multinode-382000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/multinode-382000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/multinode-382000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:77:37:b2:20:06 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/multinode-382000/disk.qcow2
	I0314 11:06:01.817001   12623 main.go:141] libmachine: STDOUT: 
	I0314 11:06:01.817023   12623 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0314 11:06:01.817055   12623 fix.go:56] duration metric: took 18.827875ms for fixHost
	I0314 11:06:01.817060   12623 start.go:83] releasing machines lock for "multinode-382000", held for 18.848833ms
	W0314 11:06:01.817069   12623 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0314 11:06:01.817104   12623 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0314 11:06:01.817109   12623 start.go:728] Will try again in 5 seconds ...
	I0314 11:06:06.819187   12623 start.go:360] acquireMachinesLock for multinode-382000: {Name:mk7f342f84a1fcf2acaf2ebdc885e8ee2d848fd8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 11:06:06.819582   12623 start.go:364] duration metric: took 292.625µs to acquireMachinesLock for "multinode-382000"
	I0314 11:06:06.819699   12623 start.go:96] Skipping create...Using existing machine configuration
	I0314 11:06:06.819719   12623 fix.go:54] fixHost starting: 
	I0314 11:06:06.820426   12623 fix.go:112] recreateIfNeeded on multinode-382000: state=Stopped err=<nil>
	W0314 11:06:06.820452   12623 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 11:06:06.825921   12623 out.go:177] * Restarting existing qemu2 VM for "multinode-382000" ...
	I0314 11:06:06.830099   12623 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/multinode-382000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/multinode-382000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/multinode-382000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:77:37:b2:20:06 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/multinode-382000/disk.qcow2
	I0314 11:06:06.843936   12623 main.go:141] libmachine: STDOUT: 
	I0314 11:06:06.844018   12623 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0314 11:06:06.844096   12623 fix.go:56] duration metric: took 24.378167ms for fixHost
	I0314 11:06:06.844117   12623 start.go:83] releasing machines lock for "multinode-382000", held for 24.512375ms
	W0314 11:06:06.844306   12623 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-382000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-382000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0314 11:06:06.852862   12623 out.go:177] 
	W0314 11:06:06.856943   12623 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0314 11:06:06.856967   12623 out.go:239] * 
	* 
	W0314 11:06:06.858453   12623 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0314 11:06:06.867827   12623 out.go:177] 

** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-382000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-382000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-382000 -n multinode-382000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-382000 -n multinode-382000: exit status 7 (37.404166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-382000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (9.03s)
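
Every failure in this block reduces to the same root cause: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so the qemu2 VM is never brought back up and the restart aborts with GUEST_PROVISION. A quick way to confirm the daemon is down, independent of minikube, is to dial the socket directly. The standalone Go sketch below is not part of the test suite; only the socket path is taken from the log above.

    // socketprobe.go: reports whether the socket_vmnet daemon is accepting
    // connections on its unix socket. Diagnostic sketch, not test-suite code.
    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        // Socket path reported by the failing starts above.
        conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
        if err != nil {
            fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
            os.Exit(1)
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }

On this runner the probe would exit 1 with a "connection refused" error, matching the STDERR that libmachine captures above.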

TestMultiNode/serial/DeleteNode (0.11s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-382000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-382000 node delete m03: exit status 83 (44.073333ms)

-- stdout --
	* The control-plane node multinode-382000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-382000"

-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-382000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-382000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-382000 status --alsologtostderr: exit status 7 (33.113958ms)

-- stdout --
	multinode-382000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0314 11:06:07.069274   12637 out.go:291] Setting OutFile to fd 1 ...
	I0314 11:06:07.069426   12637 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:06:07.069430   12637 out.go:304] Setting ErrFile to fd 2...
	I0314 11:06:07.069432   12637 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:06:07.069558   12637 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
	I0314 11:06:07.069684   12637 out.go:298] Setting JSON to false
	I0314 11:06:07.069696   12637 mustload.go:65] Loading cluster: multinode-382000
	I0314 11:06:07.069757   12637 notify.go:220] Checking for updates...
	I0314 11:06:07.069886   12637 config.go:182] Loaded profile config "multinode-382000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 11:06:07.069891   12637 status.go:255] checking status of multinode-382000 ...
	I0314 11:06:07.070114   12637 status.go:330] multinode-382000 host status = "Stopped" (err=<nil>)
	I0314 11:06:07.070118   12637 status.go:343] host is not running, skipping remaining checks
	I0314 11:06:07.070120   12637 status.go:257] multinode-382000 status: &{Name:multinode-382000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-382000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-382000 -n multinode-382000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-382000 -n multinode-382000: exit status 7 (32.455167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-382000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.11s)
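
Two distinct exit codes show up in this block. node delete returns 83 alongside the advisory text above (the control-plane host is not running), while every post-mortem minikube status returns 7. The status exit code appears to be a bitmask, one bit per unhealthy layer from right to left (1 for the host, 2 for the cluster, 4 for Kubernetes, per the minikube status help text), so 7 = 1 + 2 + 4 means all three are down. A small Go sketch of that decoding; the bit labels are an assumption taken from the help text, not from this report:

    package main

    import "fmt"

    func main() {
        // Decode exit status 7 from the post-mortems above. Bit meanings
        // assumed from `minikube status --help`: 1 = host, 2 = cluster,
        // 4 = kubernetes.
        code := 7
        layers := []struct {
            bit  int
            name string
        }{{1, "host"}, {2, "cluster"}, {4, "kubernetes"}}
        for _, l := range layers {
            if code&l.bit != 0 {
                fmt.Printf("%-10s not OK (bit %d set)\n", l.name, l.bit)
            }
        }
    }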

TestMultiNode/serial/StopMultiNode (3.32s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-382000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-382000 stop: (3.178195417s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-382000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-382000 status: exit status 7 (68.735916ms)

-- stdout --
	multinode-382000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-382000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-382000 status --alsologtostderr: exit status 7 (35.190292ms)

-- stdout --
	multinode-382000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0314 11:06:10.385864   12661 out.go:291] Setting OutFile to fd 1 ...
	I0314 11:06:10.386010   12661 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:06:10.386013   12661 out.go:304] Setting ErrFile to fd 2...
	I0314 11:06:10.386015   12661 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:06:10.386133   12661 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
	I0314 11:06:10.386251   12661 out.go:298] Setting JSON to false
	I0314 11:06:10.386260   12661 mustload.go:65] Loading cluster: multinode-382000
	I0314 11:06:10.386310   12661 notify.go:220] Checking for updates...
	I0314 11:06:10.386477   12661 config.go:182] Loaded profile config "multinode-382000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 11:06:10.386482   12661 status.go:255] checking status of multinode-382000 ...
	I0314 11:06:10.386681   12661 status.go:330] multinode-382000 host status = "Stopped" (err=<nil>)
	I0314 11:06:10.386685   12661 status.go:343] host is not running, skipping remaining checks
	I0314 11:06:10.386687   12661 status.go:257] multinode-382000 status: &{Name:multinode-382000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-382000 status --alsologtostderr": multinode-382000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-382000 status --alsologtostderr": multinode-382000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-382000 -n multinode-382000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-382000 -n multinode-382000: exit status 7 (33.22175ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-382000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (3.32s)
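
Note that the stop itself succeeded (3.18s, exit 0); the failures at multinode_test.go:364 and :368 are assertions about how many "host: Stopped" and "kubelet: Stopped" lines the status output contains. Because every earlier start failed, the profile only ever held its single control-plane node, so the counts come up short of what a true multi-node cluster would produce. A hypothetical reconstruction of that kind of count check follows; the expected count of 2 and the exact strings are illustrative, not the actual test source:

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        // Status output captured above: a single control-plane entry.
        status := "multinode-382000\ntype: Control Plane\nhost: Stopped\nkubelet: Stopped\napiserver: Stopped\nkubeconfig: Stopped\n"
        wantNodes := 2 // hypothetical: control plane plus one worker
        if got := strings.Count(status, "host: Stopped"); got != wantNodes {
            fmt.Printf("incorrect number of stopped hosts: got %d, want %d\n", got, wantNodes)
        }
        if got := strings.Count(status, "kubelet: Stopped"); got != wantNodes {
            fmt.Printf("incorrect number of stopped kubelets: got %d, want %d\n", got, wantNodes)
        }
    }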

TestMultiNode/serial/RestartMultiNode (5.26s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-382000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-382000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.190731291s)

-- stdout --
	* [multinode-382000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18384
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-382000" primary control-plane node in "multinode-382000" cluster
	* Restarting existing qemu2 VM for "multinode-382000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-382000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0314 11:06:10.452618   12665 out.go:291] Setting OutFile to fd 1 ...
	I0314 11:06:10.452741   12665 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:06:10.452746   12665 out.go:304] Setting ErrFile to fd 2...
	I0314 11:06:10.452750   12665 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:06:10.452876   12665 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
	I0314 11:06:10.453818   12665 out.go:298] Setting JSON to false
	I0314 11:06:10.470065   12665 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7542,"bootTime":1710432028,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0314 11:06:10.470132   12665 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0314 11:06:10.475548   12665 out.go:177] * [multinode-382000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0314 11:06:10.482471   12665 out.go:177]   - MINIKUBE_LOCATION=18384
	I0314 11:06:10.486482   12665 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	I0314 11:06:10.482546   12665 notify.go:220] Checking for updates...
	I0314 11:06:10.493443   12665 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0314 11:06:10.496547   12665 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 11:06:10.499461   12665 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	I0314 11:06:10.502475   12665 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 11:06:10.505833   12665 config.go:182] Loaded profile config "multinode-382000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 11:06:10.506105   12665 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 11:06:10.510455   12665 out.go:177] * Using the qemu2 driver based on existing profile
	I0314 11:06:10.517495   12665 start.go:297] selected driver: qemu2
	I0314 11:06:10.517501   12665 start.go:901] validating driver "qemu2" against &{Name:multinode-382000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-382000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 11:06:10.517560   12665 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 11:06:10.520040   12665 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 11:06:10.520068   12665 cni.go:84] Creating CNI manager for ""
	I0314 11:06:10.520072   12665 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0314 11:06:10.520117   12665 start.go:340] cluster config:
	{Name:multinode-382000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-382000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 11:06:10.524469   12665 iso.go:125] acquiring lock: {Name:mkb282eb7d87bd36e7a71d78648e2dadb7f0a89e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 11:06:10.531453   12665 out.go:177] * Starting "multinode-382000" primary control-plane node in "multinode-382000" cluster
	I0314 11:06:10.535506   12665 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0314 11:06:10.535520   12665 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0314 11:06:10.535531   12665 cache.go:56] Caching tarball of preloaded images
	I0314 11:06:10.535579   12665 preload.go:173] Found /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0314 11:06:10.535585   12665 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0314 11:06:10.535641   12665 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/multinode-382000/config.json ...
	I0314 11:06:10.535993   12665 start.go:360] acquireMachinesLock for multinode-382000: {Name:mk7f342f84a1fcf2acaf2ebdc885e8ee2d848fd8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 11:06:10.536023   12665 start.go:364] duration metric: took 21.833µs to acquireMachinesLock for "multinode-382000"
	I0314 11:06:10.536033   12665 start.go:96] Skipping create...Using existing machine configuration
	I0314 11:06:10.536040   12665 fix.go:54] fixHost starting: 
	I0314 11:06:10.536158   12665 fix.go:112] recreateIfNeeded on multinode-382000: state=Stopped err=<nil>
	W0314 11:06:10.536168   12665 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 11:06:10.540473   12665 out.go:177] * Restarting existing qemu2 VM for "multinode-382000" ...
	I0314 11:06:10.548482   12665 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/multinode-382000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/multinode-382000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/multinode-382000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:77:37:b2:20:06 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/multinode-382000/disk.qcow2
	I0314 11:06:10.550585   12665 main.go:141] libmachine: STDOUT: 
	I0314 11:06:10.550610   12665 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0314 11:06:10.550648   12665 fix.go:56] duration metric: took 14.609833ms for fixHost
	I0314 11:06:10.550652   12665 start.go:83] releasing machines lock for "multinode-382000", held for 14.624666ms
	W0314 11:06:10.550659   12665 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0314 11:06:10.550687   12665 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0314 11:06:10.550692   12665 start.go:728] Will try again in 5 seconds ...
	I0314 11:06:15.552808   12665 start.go:360] acquireMachinesLock for multinode-382000: {Name:mk7f342f84a1fcf2acaf2ebdc885e8ee2d848fd8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 11:06:15.553221   12665 start.go:364] duration metric: took 307.458µs to acquireMachinesLock for "multinode-382000"
	I0314 11:06:15.553364   12665 start.go:96] Skipping create...Using existing machine configuration
	I0314 11:06:15.553418   12665 fix.go:54] fixHost starting: 
	I0314 11:06:15.554119   12665 fix.go:112] recreateIfNeeded on multinode-382000: state=Stopped err=<nil>
	W0314 11:06:15.554146   12665 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 11:06:15.559606   12665 out.go:177] * Restarting existing qemu2 VM for "multinode-382000" ...
	I0314 11:06:15.563764   12665 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/multinode-382000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/multinode-382000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/multinode-382000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:77:37:b2:20:06 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/multinode-382000/disk.qcow2
	I0314 11:06:15.573901   12665 main.go:141] libmachine: STDOUT: 
	I0314 11:06:15.573987   12665 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0314 11:06:15.574093   12665 fix.go:56] duration metric: took 20.708334ms for fixHost
	I0314 11:06:15.574118   12665 start.go:83] releasing machines lock for "multinode-382000", held for 20.868708ms
	W0314 11:06:15.574335   12665 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-382000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-382000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0314 11:06:15.582693   12665 out.go:177] 
	W0314 11:06:15.586606   12665 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0314 11:06:15.586658   12665 out.go:239] * 
	* 
	W0314 11:06:15.589183   12665 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0314 11:06:15.598528   12665 out.go:177] 

** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-382000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-382000 -n multinode-382000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-382000 -n multinode-382000: exit status 7 (71.261ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-382000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.26s)
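
The stderr log shows that the start path retries exactly once: the first fixHost failure is logged as a warning ("StartHost failed, but will try again"), start.go waits five seconds, repeats the identical qemu invocation, and only the second failure becomes fatal (GUEST_PROVISION, exit 80). A minimal sketch of that retry-once shape, with startHost standing in for the driver-start step; this is an illustration of the behavior in the log, not minikube's actual code:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // startHost stands in for minikube's fixHost/driver-start step; on this
    // runner it always fails with the socket_vmnet connection error.
    func startHost() error {
        return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
    }

    func main() {
        if err := startHost(); err != nil {
            fmt.Println("! StartHost failed, but will try again:", err)
            time.Sleep(5 * time.Second) // mirrors "Will try again in 5 seconds ..."
            if err := startHost(); err != nil {
                fmt.Println("X Exiting due to GUEST_PROVISION:", err)
            }
        }
    }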

TestMultiNode/serial/ValidateNameConflict (20.09s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-382000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-382000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-382000-m01 --driver=qemu2 : exit status 80 (9.830697459s)

-- stdout --
	* [multinode-382000-m01] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18384
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-382000-m01" primary control-plane node in "multinode-382000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-382000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-382000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-382000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-382000-m02 --driver=qemu2 : exit status 80 (10.00201125s)

-- stdout --
	* [multinode-382000-m02] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18384
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-382000-m02" primary control-plane node in "multinode-382000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-382000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-382000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-382000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-382000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-382000: exit status 83 (81.372334ms)

-- stdout --
	* The control-plane node multinode-382000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-382000"

-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-382000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-382000 -n multinode-382000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-382000 -n multinode-382000: exit status 7 (32.239333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-382000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.09s)
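
One detail of the post-mortem helpers used throughout this report: --format={{.Host}} is a Go text/template rendered against the status struct, the same struct logged at status.go:257 above, which is why the command prints just "Stopped". The sketch below trims that struct to the fields visible in the log line; the real type lives in minikube and carries more fields:

    package main

    import (
        "os"
        "text/template"
    )

    // Status mirrors the fields visible in the status.go:257 log line,
    // trimmed for illustration.
    type Status struct {
        Name, Host, Kubelet, APIServer, Kubeconfig string
    }

    func main() {
        s := Status{Name: "multinode-382000", Host: "Stopped",
            Kubelet: "Stopped", APIServer: "Stopped", Kubeconfig: "Stopped"}
        // The same template string the helpers pass via --format.
        tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
        _ = tmpl.Execute(os.Stdout, s) // prints: Stopped
    }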

TestPreload (10.12s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-009000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-009000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.938113709s)

-- stdout --
	* [test-preload-009000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18384
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-009000" primary control-plane node in "test-preload-009000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-009000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0314 11:06:35.938079   12726 out.go:291] Setting OutFile to fd 1 ...
	I0314 11:06:35.938192   12726 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:06:35.938194   12726 out.go:304] Setting ErrFile to fd 2...
	I0314 11:06:35.938197   12726 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:06:35.938309   12726 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
	I0314 11:06:35.939390   12726 out.go:298] Setting JSON to false
	I0314 11:06:35.955348   12726 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7567,"bootTime":1710432028,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0314 11:06:35.955413   12726 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0314 11:06:35.960194   12726 out.go:177] * [test-preload-009000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0314 11:06:35.967191   12726 out.go:177]   - MINIKUBE_LOCATION=18384
	I0314 11:06:35.972081   12726 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	I0314 11:06:35.967259   12726 notify.go:220] Checking for updates...
	I0314 11:06:35.978127   12726 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0314 11:06:35.981094   12726 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 11:06:35.984213   12726 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	I0314 11:06:35.987148   12726 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 11:06:35.988829   12726 config.go:182] Loaded profile config "multinode-382000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 11:06:35.988880   12726 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 11:06:35.993101   12726 out.go:177] * Using the qemu2 driver based on user configuration
	I0314 11:06:35.999982   12726 start.go:297] selected driver: qemu2
	I0314 11:06:35.999987   12726 start.go:901] validating driver "qemu2" against <nil>
	I0314 11:06:35.999993   12726 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 11:06:36.002197   12726 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0314 11:06:36.005079   12726 out.go:177] * Automatically selected the socket_vmnet network
	I0314 11:06:36.008191   12726 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 11:06:36.008215   12726 cni.go:84] Creating CNI manager for ""
	I0314 11:06:36.008221   12726 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0314 11:06:36.008226   12726 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0314 11:06:36.008253   12726 start.go:340] cluster config:
	{Name:test-preload-009000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-009000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 11:06:36.012636   12726 iso.go:125] acquiring lock: {Name:mkb282eb7d87bd36e7a71d78648e2dadb7f0a89e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 11:06:36.020137   12726 out.go:177] * Starting "test-preload-009000" primary control-plane node in "test-preload-009000" cluster
	I0314 11:06:36.024151   12726 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0314 11:06:36.024227   12726 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/test-preload-009000/config.json ...
	I0314 11:06:36.024250   12726 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/test-preload-009000/config.json: {Name:mkddaf5d9d84725b1e8ee7c993f34b915a3a892f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 11:06:36.024268   12726 cache.go:107] acquiring lock: {Name:mkb5d8b64feb3785748df6a1b45e61ff7bce7f59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 11:06:36.024309   12726 cache.go:107] acquiring lock: {Name:mkbab27b21a60e18d749428422b2b841d3f0c1a7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 11:06:36.024354   12726 cache.go:107] acquiring lock: {Name:mkfb46ef3d2956e87d85ac63f6477f79de6157a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 11:06:36.024281   12726 cache.go:107] acquiring lock: {Name:mkc18810cdd32c73ef2721bb31bd43ac994d2e4a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 11:06:36.024384   12726 cache.go:107] acquiring lock: {Name:mkc8e10e8f17cacd37251820c522e9a1747d672e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 11:06:36.024420   12726 cache.go:107] acquiring lock: {Name:mkc0a6724a070708b5bc45ff5c6b9a039f801973 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 11:06:36.024560   12726 cache.go:107] acquiring lock: {Name:mkfe320460a8cf6a1c599234c2f09c96e6a3c3bc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 11:06:36.024574   12726 start.go:360] acquireMachinesLock for test-preload-009000: {Name:mk7f342f84a1fcf2acaf2ebdc885e8ee2d848fd8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 11:06:36.024602   12726 cache.go:107] acquiring lock: {Name:mkd002ab5da7e8f57990341696bd7bc4565c5ed7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 11:06:36.024615   12726 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0314 11:06:36.024674   12726 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 11:06:36.024705   12726 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0314 11:06:36.024723   12726 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0314 11:06:36.024746   12726 start.go:364] duration metric: took 159.416µs to acquireMachinesLock for "test-preload-009000"
	I0314 11:06:36.024543   12726 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0314 11:06:36.024761   12726 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0314 11:06:36.024763   12726 start.go:93] Provisioning new machine with config: &{Name:test-preload-009000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-009000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0314 11:06:36.024820   12726 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0314 11:06:36.024824   12726 start.go:125] createHost starting for "" (driver="qemu2")
	I0314 11:06:36.029214   12726 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0314 11:06:36.024893   12726 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0314 11:06:36.034420   12726 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0314 11:06:36.034487   12726 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0314 11:06:36.035054   12726 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0314 11:06:36.035086   12726 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 11:06:36.037079   12726 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0314 11:06:36.037102   12726 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0314 11:06:36.037128   12726 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0314 11:06:36.037155   12726 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0314 11:06:36.046837   12726 start.go:159] libmachine.API.Create for "test-preload-009000" (driver="qemu2")
	I0314 11:06:36.046875   12726 client.go:168] LocalClient.Create starting
	I0314 11:06:36.046960   12726 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/ca.pem
	I0314 11:06:36.046990   12726 main.go:141] libmachine: Decoding PEM data...
	I0314 11:06:36.047022   12726 main.go:141] libmachine: Parsing certificate...
	I0314 11:06:36.047068   12726 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/cert.pem
	I0314 11:06:36.047091   12726 main.go:141] libmachine: Decoding PEM data...
	I0314 11:06:36.047115   12726 main.go:141] libmachine: Parsing certificate...
	I0314 11:06:36.047539   12726 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18384-10823/.minikube/cache/iso/arm64/minikube-v1.32.1-1710348681-18375-arm64.iso...
	I0314 11:06:36.219287   12726 main.go:141] libmachine: Creating SSH key...
	I0314 11:06:36.274991   12726 main.go:141] libmachine: Creating Disk image...
	I0314 11:06:36.275010   12726 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0314 11:06:36.275257   12726 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/test-preload-009000/disk.qcow2.raw /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/test-preload-009000/disk.qcow2
	I0314 11:06:36.287816   12726 main.go:141] libmachine: STDOUT: 
	I0314 11:06:36.287846   12726 main.go:141] libmachine: STDERR: 
	I0314 11:06:36.287930   12726 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/test-preload-009000/disk.qcow2 +20000M
	I0314 11:06:36.300074   12726 main.go:141] libmachine: STDOUT: Image resized.
	
	I0314 11:06:36.300102   12726 main.go:141] libmachine: STDERR: 
	I0314 11:06:36.300117   12726 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/test-preload-009000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/test-preload-009000/disk.qcow2
	I0314 11:06:36.300122   12726 main.go:141] libmachine: Starting QEMU VM...
	I0314 11:06:36.300157   12726 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/test-preload-009000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/test-preload-009000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/test-preload-009000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:36:95:be:25:7b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/test-preload-009000/disk.qcow2
	I0314 11:06:36.301981   12726 main.go:141] libmachine: STDOUT: 
	I0314 11:06:36.302001   12726 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0314 11:06:36.302020   12726 client.go:171] duration metric: took 255.144875ms to LocalClient.Create
	I0314 11:06:37.974636   12726 cache.go:162] opening:  /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0314 11:06:38.026781   12726 cache.go:162] opening:  /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	W0314 11:06:38.056024   12726 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0314 11:06:38.056109   12726 cache.go:162] opening:  /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0314 11:06:38.073027   12726 cache.go:162] opening:  /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0314 11:06:38.097160   12726 cache.go:162] opening:  /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0314 11:06:38.097318   12726 cache.go:162] opening:  /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0314 11:06:38.105627   12726 cache.go:162] opening:  /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0314 11:06:38.206941   12726 cache.go:157] /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0314 11:06:38.206994   12726 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 2.182706125s
	I0314 11:06:38.207013   12726 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	I0314 11:06:38.302240   12726 start.go:128] duration metric: took 2.277435792s to createHost
	I0314 11:06:38.302283   12726 start.go:83] releasing machines lock for "test-preload-009000", held for 2.277571083s
	W0314 11:06:38.302342   12726 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0314 11:06:38.315051   12726 out.go:177] * Deleting "test-preload-009000" in qemu2 ...
	W0314 11:06:38.341980   12726 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0314 11:06:38.342019   12726 start.go:728] Will try again in 5 seconds ...
	W0314 11:06:38.632564   12726 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0314 11:06:38.632641   12726 cache.go:162] opening:  /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0314 11:06:39.262210   12726 cache.go:157] /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0314 11:06:39.262326   12726 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 3.237960541s
	I0314 11:06:39.262352   12726 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0314 11:06:40.417053   12726 cache.go:157] /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0314 11:06:40.417102   12726 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 4.392796208s
	I0314 11:06:40.417128   12726 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0314 11:06:40.459671   12726 cache.go:157] /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0314 11:06:40.459705   12726 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 4.435522125s
	I0314 11:06:40.459721   12726 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0314 11:06:41.657953   12726 cache.go:157] /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0314 11:06:41.658003   12726 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 5.633833584s
	I0314 11:06:41.658030   12726 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0314 11:06:42.048998   12726 cache.go:157] /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0314 11:06:42.049043   12726 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 6.024891833s
	I0314 11:06:42.049067   12726 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0314 11:06:43.342065   12726 start.go:360] acquireMachinesLock for test-preload-009000: {Name:mk7f342f84a1fcf2acaf2ebdc885e8ee2d848fd8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 11:06:43.342474   12726 start.go:364] duration metric: took 329.917µs to acquireMachinesLock for "test-preload-009000"
	I0314 11:06:43.342586   12726 start.go:93] Provisioning new machine with config: &{Name:test-preload-009000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-009000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0314 11:06:43.342783   12726 start.go:125] createHost starting for "" (driver="qemu2")
	I0314 11:06:43.353315   12726 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0314 11:06:43.399866   12726 start.go:159] libmachine.API.Create for "test-preload-009000" (driver="qemu2")
	I0314 11:06:43.399912   12726 client.go:168] LocalClient.Create starting
	I0314 11:06:43.400045   12726 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/ca.pem
	I0314 11:06:43.400144   12726 main.go:141] libmachine: Decoding PEM data...
	I0314 11:06:43.400164   12726 main.go:141] libmachine: Parsing certificate...
	I0314 11:06:43.400274   12726 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/cert.pem
	I0314 11:06:43.400322   12726 main.go:141] libmachine: Decoding PEM data...
	I0314 11:06:43.400339   12726 main.go:141] libmachine: Parsing certificate...
	I0314 11:06:43.400903   12726 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18384-10823/.minikube/cache/iso/arm64/minikube-v1.32.1-1710348681-18375-arm64.iso...
	I0314 11:06:43.535373   12726 cache.go:157] /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0314 11:06:43.535399   12726 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 7.510962541s
	I0314 11:06:43.535408   12726 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0314 11:06:43.563229   12726 main.go:141] libmachine: Creating SSH key...
	I0314 11:06:43.772865   12726 main.go:141] libmachine: Creating Disk image...
	I0314 11:06:43.772871   12726 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0314 11:06:43.773090   12726 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/test-preload-009000/disk.qcow2.raw /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/test-preload-009000/disk.qcow2
	I0314 11:06:43.790113   12726 main.go:141] libmachine: STDOUT: 
	I0314 11:06:43.790135   12726 main.go:141] libmachine: STDERR: 
	I0314 11:06:43.790196   12726 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/test-preload-009000/disk.qcow2 +20000M
	I0314 11:06:43.801299   12726 main.go:141] libmachine: STDOUT: Image resized.
	
	I0314 11:06:43.801317   12726 main.go:141] libmachine: STDERR: 
	I0314 11:06:43.801329   12726 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/test-preload-009000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/test-preload-009000/disk.qcow2
	I0314 11:06:43.801332   12726 main.go:141] libmachine: Starting QEMU VM...
	I0314 11:06:43.801375   12726 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/test-preload-009000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/test-preload-009000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/test-preload-009000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:de:59:1a:9e:9b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/test-preload-009000/disk.qcow2
	I0314 11:06:43.803383   12726 main.go:141] libmachine: STDOUT: 
	I0314 11:06:43.803400   12726 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0314 11:06:43.803415   12726 client.go:171] duration metric: took 403.505042ms to LocalClient.Create
	I0314 11:06:44.691972   12726 cache.go:157] /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I0314 11:06:44.692059   12726 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 8.667760458s
	I0314 11:06:44.692088   12726 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I0314 11:06:44.692126   12726 cache.go:87] Successfully saved all images to host disk.
	I0314 11:06:45.805584   12726 start.go:128] duration metric: took 2.462824416s to createHost
	I0314 11:06:45.805627   12726 start.go:83] releasing machines lock for "test-preload-009000", held for 2.463179542s
	W0314 11:06:45.805881   12726 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-009000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-009000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0314 11:06:45.815257   12726 out.go:177] 
	W0314 11:06:45.820240   12726 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0314 11:06:45.820260   12726 out.go:239] * 
	* 
	W0314 11:06:45.821428   12726 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0314 11:06:45.832212   12726 out.go:177] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-009000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:626: *** TestPreload FAILED at 2024-03-14 11:06:45.847992 -0700 PDT m=+688.359214293
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-009000 -n test-preload-009000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-009000 -n test-preload-009000: exit status 7 (72.564958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-009000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-009000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-009000
--- FAIL: TestPreload (10.12s)
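
The qemu2 failures in this section all trace to the same root cause: nothing is answering on /var/run/socket_vmnet, so socket_vmnet_client cannot hand QEMU its network file descriptor. A minimal standalone Go sketch (not part of this suite; the socket path is the SocketVMnetPath value from the machine config logged above) that reproduces the connectivity check:

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// Dial the unix socket that socket_vmnet_client and QEMU's
		// "-netdev socket" backend depend on.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// Mirrors the logged failure:
			//   Failed to connect to "/var/run/socket_vmnet": Connection refused
			fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

A "connection refused" from this probe means the socket_vmnet daemon is not running (or its socket is stale) on the build host, which matches every "Failed to connect" line above.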

TestScheduledStopUnix (9.99s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-101000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-101000 --memory=2048 --driver=qemu2 : exit status 80 (9.818537375s)

-- stdout --
	* [scheduled-stop-101000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18384
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-101000" primary control-plane node in "scheduled-stop-101000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-101000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-101000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-101000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18384
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-101000" primary control-plane node in "scheduled-stop-101000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-101000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-101000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-03-14 11:06:55.844124 -0700 PDT m=+698.355533835
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-101000 -n scheduled-stop-101000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-101000 -n scheduled-stop-101000: exit status 7 (69.679125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-101000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-101000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-101000
--- FAIL: TestScheduledStopUnix (9.99s)

TestSkaffold (16.74s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe3826381047 version
skaffold_test.go:63: skaffold version: v2.10.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-914000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-914000 --memory=2600 --driver=qemu2 : exit status 80 (9.859073834s)

-- stdout --
	* [skaffold-914000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18384
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-914000" primary control-plane node in "skaffold-914000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-914000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-914000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-914000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18384
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-914000" primary control-plane node in "skaffold-914000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-914000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-914000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestSkaffold FAILED at 2024-03-14 11:07:12.581525 -0700 PDT m=+715.093250210
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-914000 -n skaffold-914000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-914000 -n skaffold-914000: exit status 7 (65.357708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-914000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-914000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-914000
--- FAIL: TestSkaffold (16.74s)
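
For reference, the "(dbg) Run:" entries above are the harness shelling out to the minikube binary and asserting on its exit status. A minimal sketch of that pattern (hypothetical profile name; this is not the suite's actual helper code):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		// Same invocation shape as the failing runs above. "demo-000000" is a
		// hypothetical profile name, not one used in this report.
		cmd := exec.Command("out/minikube-darwin-arm64",
			"start", "-p", "demo-000000", "--memory=2600", "--driver=qemu2")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s", out)
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// The runs above exit with status 80, accompanying the
			// GUEST_PROVISION error shown in their stderr blocks.
			fmt.Println("exit status:", exitErr.ExitCode())
		}
	}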

TestRunningBinaryUpgrade (633.58s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.1592547219 start -p running-upgrade-636000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.1592547219 start -p running-upgrade-636000 --memory=2200 --vm-driver=qemu2 : (1m20.207172542s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-636000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-636000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m35.560896334s)

-- stdout --
	* [running-upgrade-636000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18384
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-636000" primary control-plane node in "running-upgrade-636000" cluster
	* Updating the running qemu2 "running-upgrade-636000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0314 11:09:19.013835   13130 out.go:291] Setting OutFile to fd 1 ...
	I0314 11:09:19.014152   13130 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:09:19.014156   13130 out.go:304] Setting ErrFile to fd 2...
	I0314 11:09:19.014159   13130 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:09:19.014294   13130 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
	I0314 11:09:19.015516   13130 out.go:298] Setting JSON to false
	I0314 11:09:19.035105   13130 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7731,"bootTime":1710432028,"procs":453,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0314 11:09:19.035180   13130 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0314 11:09:19.040742   13130 out.go:177] * [running-upgrade-636000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0314 11:09:19.049055   13130 out.go:177]   - MINIKUBE_LOCATION=18384
	I0314 11:09:19.049084   13130 notify.go:220] Checking for updates...
	I0314 11:09:19.057592   13130 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	I0314 11:09:19.061651   13130 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0314 11:09:19.064575   13130 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 11:09:19.068593   13130 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	I0314 11:09:19.071633   13130 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 11:09:19.075025   13130 config.go:182] Loaded profile config "running-upgrade-636000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0314 11:09:19.078610   13130 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0314 11:09:19.081794   13130 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 11:09:19.087568   13130 out.go:177] * Using the qemu2 driver based on existing profile
	I0314 11:09:19.094472   13130 start.go:297] selected driver: qemu2
	I0314 11:09:19.094486   13130 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-636000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52128 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-636000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0314 11:09:19.094532   13130 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 11:09:19.097717   13130 cni.go:84] Creating CNI manager for ""
	I0314 11:09:19.097744   13130 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0314 11:09:19.097909   13130 start.go:340] cluster config:
	{Name:running-upgrade-636000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52128 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-636000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0314 11:09:19.097959   13130 iso.go:125] acquiring lock: {Name:mkb282eb7d87bd36e7a71d78648e2dadb7f0a89e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 11:09:19.106581   13130 out.go:177] * Starting "running-upgrade-636000" primary control-plane node in "running-upgrade-636000" cluster
	I0314 11:09:19.110574   13130 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0314 11:09:19.110590   13130 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0314 11:09:19.110622   13130 cache.go:56] Caching tarball of preloaded images
	I0314 11:09:19.110889   13130 preload.go:173] Found /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0314 11:09:19.110896   13130 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0314 11:09:19.110953   13130 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/running-upgrade-636000/config.json ...
	I0314 11:09:19.111353   13130 start.go:360] acquireMachinesLock for running-upgrade-636000: {Name:mk7f342f84a1fcf2acaf2ebdc885e8ee2d848fd8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 11:09:19.111387   13130 start.go:364] duration metric: took 27.166µs to acquireMachinesLock for "running-upgrade-636000"
	I0314 11:09:19.111404   13130 start.go:96] Skipping create...Using existing machine configuration
	I0314 11:09:19.111417   13130 fix.go:54] fixHost starting: 
	I0314 11:09:19.112074   13130 fix.go:112] recreateIfNeeded on running-upgrade-636000: state=Running err=<nil>
	W0314 11:09:19.112081   13130 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 11:09:19.119563   13130 out.go:177] * Updating the running qemu2 "running-upgrade-636000" VM ...
	I0314 11:09:19.123580   13130 machine.go:94] provisionDockerMachine start ...
	I0314 11:09:19.123622   13130 main.go:141] libmachine: Using SSH client type: native
	I0314 11:09:19.123903   13130 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1032b5bf0] 0x1032b8450 <nil>  [] 0s} localhost 52096 <nil> <nil>}
	I0314 11:09:19.123909   13130 main.go:141] libmachine: About to run SSH command:
	hostname
	I0314 11:09:19.180297   13130 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-636000
	
	I0314 11:09:19.180320   13130 buildroot.go:166] provisioning hostname "running-upgrade-636000"
	I0314 11:09:19.180377   13130 main.go:141] libmachine: Using SSH client type: native
	I0314 11:09:19.180490   13130 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1032b5bf0] 0x1032b8450 <nil>  [] 0s} localhost 52096 <nil> <nil>}
	I0314 11:09:19.180499   13130 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-636000 && echo "running-upgrade-636000" | sudo tee /etc/hostname
	I0314 11:09:19.244859   13130 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-636000
	
	I0314 11:09:19.244913   13130 main.go:141] libmachine: Using SSH client type: native
	I0314 11:09:19.245011   13130 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1032b5bf0] 0x1032b8450 <nil>  [] 0s} localhost 52096 <nil> <nil>}
	I0314 11:09:19.245018   13130 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-636000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-636000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-636000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 11:09:19.302822   13130 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 11:09:19.302836   13130 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18384-10823/.minikube CaCertPath:/Users/jenkins/minikube-integration/18384-10823/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18384-10823/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18384-10823/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18384-10823/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18384-10823/.minikube}
	I0314 11:09:19.302843   13130 buildroot.go:174] setting up certificates
	I0314 11:09:19.302847   13130 provision.go:84] configureAuth start
	I0314 11:09:19.302864   13130 provision.go:143] copyHostCerts
	I0314 11:09:19.302954   13130 exec_runner.go:144] found /Users/jenkins/minikube-integration/18384-10823/.minikube/ca.pem, removing ...
	I0314 11:09:19.302969   13130 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18384-10823/.minikube/ca.pem
	I0314 11:09:19.303081   13130 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18384-10823/.minikube/ca.pem (1082 bytes)
	I0314 11:09:19.303231   13130 exec_runner.go:144] found /Users/jenkins/minikube-integration/18384-10823/.minikube/cert.pem, removing ...
	I0314 11:09:19.303234   13130 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18384-10823/.minikube/cert.pem
	I0314 11:09:19.303275   13130 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18384-10823/.minikube/cert.pem (1123 bytes)
	I0314 11:09:19.303369   13130 exec_runner.go:144] found /Users/jenkins/minikube-integration/18384-10823/.minikube/key.pem, removing ...
	I0314 11:09:19.303372   13130 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18384-10823/.minikube/key.pem
	I0314 11:09:19.303407   13130 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18384-10823/.minikube/key.pem (1675 bytes)
	I0314 11:09:19.303486   13130 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18384-10823/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18384-10823/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-636000 san=[127.0.0.1 localhost minikube running-upgrade-636000]
	I0314 11:09:19.497210   13130 provision.go:177] copyRemoteCerts
	I0314 11:09:19.497291   13130 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 11:09:19.497299   13130 sshutil.go:53] new ssh client: &{IP:localhost Port:52096 SSHKeyPath:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/running-upgrade-636000/id_rsa Username:docker}
	I0314 11:09:19.529329   13130 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0314 11:09:19.536652   13130 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0314 11:09:19.543683   13130 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0314 11:09:19.553202   13130 provision.go:87] duration metric: took 250.336167ms to configureAuth
	I0314 11:09:19.553215   13130 buildroot.go:189] setting minikube options for container-runtime
	I0314 11:09:19.553333   13130 config.go:182] Loaded profile config "running-upgrade-636000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0314 11:09:19.553371   13130 main.go:141] libmachine: Using SSH client type: native
	I0314 11:09:19.553465   13130 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1032b5bf0] 0x1032b8450 <nil>  [] 0s} localhost 52096 <nil> <nil>}
	I0314 11:09:19.553470   13130 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0314 11:09:19.613477   13130 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0314 11:09:19.613492   13130 buildroot.go:70] root file system type: tmpfs
	I0314 11:09:19.613543   13130 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0314 11:09:19.613589   13130 main.go:141] libmachine: Using SSH client type: native
	I0314 11:09:19.613694   13130 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1032b5bf0] 0x1032b8450 <nil>  [] 0s} localhost 52096 <nil> <nil>}
	I0314 11:09:19.613726   13130 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0314 11:09:19.676415   13130 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0314 11:09:19.676471   13130 main.go:141] libmachine: Using SSH client type: native
	I0314 11:09:19.676612   13130 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1032b5bf0] 0x1032b8450 <nil>  [] 0s} localhost 52096 <nil> <nil>}
	I0314 11:09:19.676621   13130 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0314 11:09:19.737487   13130 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 11:09:19.737498   13130 machine.go:97] duration metric: took 613.924792ms to provisionDockerMachine
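
The SSH command above is an update-if-changed swap: "diff -u" succeeds (and the whole block short-circuits) when the freshly rendered docker.service.new matches the unit already on disk, so dockerd is only re-enabled and restarted when the unit actually changed. A rough Go equivalent of that idiom, with illustrative file names:

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    )

    // swapIfChanged mirrors the "diff || { mv; restart; }" idiom above:
    // it replaces dst with src only when their contents differ, and
    // reports whether a swap (and hence a daemon restart) is needed.
    func swapIfChanged(dst, src string) (bool, error) {
    	oldUnit, err := os.ReadFile(dst)
    	if err != nil && !os.IsNotExist(err) {
    		return false, err
    	}
    	newUnit, err := os.ReadFile(src)
    	if err != nil {
    		return false, err
    	}
    	if bytes.Equal(oldUnit, newUnit) {
    		return false, nil // identical: leave dst alone, no restart
    	}
    	return true, os.Rename(src, dst)
    }

    func main() {
    	changed, err := swapIfChanged("docker.service", "docker.service.new")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("unit changed, restart needed:", changed)
    }
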
	I0314 11:09:19.737504   13130 start.go:293] postStartSetup for "running-upgrade-636000" (driver="qemu2")
	I0314 11:09:19.737510   13130 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 11:09:19.737563   13130 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 11:09:19.737572   13130 sshutil.go:53] new ssh client: &{IP:localhost Port:52096 SSHKeyPath:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/running-upgrade-636000/id_rsa Username:docker}
	I0314 11:09:19.768214   13130 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 11:09:19.769665   13130 info.go:137] Remote host: Buildroot 2021.02.12
	I0314 11:09:19.769674   13130 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18384-10823/.minikube/addons for local assets ...
	I0314 11:09:19.769731   13130 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18384-10823/.minikube/files for local assets ...
	I0314 11:09:19.769811   13130 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18384-10823/.minikube/files/etc/ssl/certs/112382.pem -> 112382.pem in /etc/ssl/certs
	I0314 11:09:19.769903   13130 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 11:09:19.773036   13130 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18384-10823/.minikube/files/etc/ssl/certs/112382.pem --> /etc/ssl/certs/112382.pem (1708 bytes)
	I0314 11:09:19.779883   13130 start.go:296] duration metric: took 42.375ms for postStartSetup
	I0314 11:09:19.779896   13130 fix.go:56] duration metric: took 668.501333ms for fixHost
	I0314 11:09:19.779931   13130 main.go:141] libmachine: Using SSH client type: native
	I0314 11:09:19.780025   13130 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1032b5bf0] 0x1032b8450 <nil>  [] 0s} localhost 52096 <nil> <nil>}
	I0314 11:09:19.780030   13130 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0314 11:09:19.839710   13130 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710439759.459903765
	
	I0314 11:09:19.839719   13130 fix.go:216] guest clock: 1710439759.459903765
	I0314 11:09:19.839723   13130 fix.go:229] Guest: 2024-03-14 11:09:19.459903765 -0700 PDT Remote: 2024-03-14 11:09:19.779898 -0700 PDT m=+0.862527501 (delta=-319.994235ms)
	I0314 11:09:19.839735   13130 fix.go:200] guest clock delta is within tolerance: -319.994235ms
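
fix.go compares the guest's "date +%s.%N" output against the host-side timestamp and accepts the skew seen here (about -320ms) as within tolerance; a larger delta would trigger a clock resync in the guest. A sketch of that comparison in Go; the tolerance constant is illustrative, since the log does not state minikube's actual threshold:

    package main

    import (
    	"fmt"
    	"strconv"
    	"time"
    )

    func main() {
    	// "1710439759.459903765" is the guest's `date +%s.%N` output from the log.
    	guestSec, _ := strconv.ParseFloat("1710439759.459903765", 64)
    	guest := time.Unix(0, int64(guestSec*float64(time.Second)))
    	host := time.Now() // in the log this is the host-side reference timestamp
    	delta := guest.Sub(host)
    	// Hypothetical tolerance: small skews are accepted, large ones resynced.
    	const tolerance = 2 * time.Second
    	fmt.Printf("delta=%v within=%v\n", delta, delta > -tolerance && delta < tolerance)
    }
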
	I0314 11:09:19.839738   13130 start.go:83] releasing machines lock for "running-upgrade-636000", held for 728.36075ms
	I0314 11:09:19.839798   13130 ssh_runner.go:195] Run: cat /version.json
	I0314 11:09:19.839807   13130 sshutil.go:53] new ssh client: &{IP:localhost Port:52096 SSHKeyPath:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/running-upgrade-636000/id_rsa Username:docker}
	I0314 11:09:19.839817   13130 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 11:09:19.839840   13130 sshutil.go:53] new ssh client: &{IP:localhost Port:52096 SSHKeyPath:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/running-upgrade-636000/id_rsa Username:docker}
	W0314 11:09:19.840539   13130 sshutil.go:64] dial failure (will retry): dial tcp [::1]:52096: connect: connection refused
	I0314 11:09:19.840570   13130 retry.go:31] will retry after 355.989396ms: dial tcp [::1]:52096: connect: connection refused
	W0314 11:09:20.249588   13130 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0314 11:09:20.249826   13130 ssh_runner.go:195] Run: systemctl --version
	I0314 11:09:20.254458   13130 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0314 11:09:20.258004   13130 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 11:09:20.258065   13130 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0314 11:09:20.264200   13130 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0314 11:09:20.272184   13130 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
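
The two find/sed invocations above rewrite whatever bridge and podman CNI configs exist under /etc/cni/net.d so their subnet and gateway fields match minikube's pod CIDR 10.244.0.0/16. The same regex-on-JSON rewrite sketched in Go (a real tool would parse the JSON rather than pattern-match it, but pattern-matching is what the logged command does):

    package main

    import (
    	"fmt"
    	"regexp"
    )

    func main() {
    	// Illustrative conflist fragment standing in for 87-podman-bridge.conflist.
    	conf := `{"type": "bridge", "ipam": {"subnet": "10.88.0.0/16", "gateway": "10.88.0.1"}}`
    	subnet := regexp.MustCompile(`"subnet":\s*"[^"]*"`)
    	gateway := regexp.MustCompile(`"gateway":\s*"[^"]*"`)
    	out := subnet.ReplaceAllString(conf, `"subnet": "10.244.0.0/16"`)
    	out = gateway.ReplaceAllString(out, `"gateway": "10.244.0.1"`)
    	fmt.Println(out)
    }
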
	I0314 11:09:20.272197   13130 start.go:494] detecting cgroup driver to use...
	I0314 11:09:20.273361   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 11:09:20.282125   13130 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0314 11:09:20.286437   13130 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0314 11:09:20.290310   13130 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0314 11:09:20.290347   13130 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0314 11:09:20.294157   13130 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0314 11:09:20.297670   13130 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0314 11:09:20.301446   13130 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0314 11:09:20.305186   13130 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 11:09:20.308627   13130 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0314 11:09:20.311972   13130 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 11:09:20.314499   13130 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 11:09:20.317577   13130 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 11:09:20.409664   13130 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0314 11:09:20.416315   13130 start.go:494] detecting cgroup driver to use...
	I0314 11:09:20.416430   13130 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0314 11:09:20.421895   13130 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 11:09:20.427218   13130 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 11:09:20.433626   13130 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 11:09:20.438395   13130 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0314 11:09:20.444367   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 11:09:20.449398   13130 ssh_runner.go:195] Run: which cri-dockerd
	I0314 11:09:20.450745   13130 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0314 11:09:20.453590   13130 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0314 11:09:20.459130   13130 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0314 11:09:20.540673   13130 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0314 11:09:20.631530   13130 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0314 11:09:20.631582   13130 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0314 11:09:20.637102   13130 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 11:09:20.723034   13130 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0314 11:09:33.879995   13130 ssh_runner.go:235] Completed: sudo systemctl restart docker: (13.157190459s)
	I0314 11:09:33.880067   13130 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0314 11:09:33.884796   13130 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0314 11:09:33.891909   13130 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0314 11:09:33.896287   13130 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0314 11:09:33.953853   13130 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0314 11:09:34.035214   13130 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 11:09:34.120756   13130 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0314 11:09:34.126963   13130 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0314 11:09:34.131789   13130 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 11:09:34.212626   13130 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0314 11:09:34.254683   13130 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0314 11:09:34.254782   13130 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0314 11:09:34.256694   13130 start.go:562] Will wait 60s for crictl version
	I0314 11:09:34.256735   13130 ssh_runner.go:195] Run: which crictl
	I0314 11:09:34.258235   13130 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 11:09:34.269909   13130 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0314 11:09:34.269975   13130 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0314 11:09:34.287469   13130 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0314 11:09:34.308458   13130 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0314 11:09:34.308523   13130 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0314 11:09:34.310097   13130 kubeadm.go:877] updating cluster {Name:running-upgrade-636000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52128 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-636000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0314 11:09:34.310146   13130 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0314 11:09:34.310184   13130 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0314 11:09:34.320715   13130 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0314 11:09:34.320723   13130 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0314 11:09:34.320769   13130 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0314 11:09:34.323995   13130 ssh_runner.go:195] Run: which lz4
	I0314 11:09:34.325169   13130 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0314 11:09:34.326364   13130 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0314 11:09:34.326374   13130 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0314 11:09:35.025053   13130 docker.go:649] duration metric: took 699.924959ms to copy over tarball
	I0314 11:09:35.025127   13130 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0314 11:09:36.510615   13130 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.485503625s)
	I0314 11:09:36.510641   13130 ssh_runner.go:146] rm: /preloaded.tar.lz4
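
The preload round-trip above follows a stat-then-copy pattern: ssh_runner first stats /preloaded.tar.lz4 on the guest (ssh_runner.go:352), and only because that existence check fails does it scp the 359 MB tarball across, untar it into /var, and remove it. The real probe compares size and mtime via stat -c "%s %y"; the Go sketch below simplifies it to a pure existence test:

    package main

    import (
    	"errors"
    	"fmt"
    	"io/fs"
    	"os"
    )

    // needsTransfer is a simplified version of the existence check above:
    // a missing file means the preload tarball must be copied over first.
    func needsTransfer(path string) (bool, error) {
    	_, err := os.Stat(path)
    	if errors.Is(err, fs.ErrNotExist) {
    		return true, nil
    	}
    	return false, err
    }

    func main() {
    	ok, err := needsTransfer("/preloaded.tar.lz4")
    	fmt.Println("needs transfer:", ok, "err:", err)
    }
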
	I0314 11:09:36.525945   13130 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0314 11:09:36.529162   13130 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0314 11:09:36.534244   13130 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 11:09:36.607225   13130 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0314 11:09:37.864888   13130 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.257668708s)
	I0314 11:09:37.864974   13130 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0314 11:09:37.877370   13130 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0314 11:09:37.877380   13130 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0314 11:09:37.877385   13130 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0314 11:09:37.887271   13130 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0314 11:09:37.887300   13130 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0314 11:09:37.887269   13130 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0314 11:09:37.887278   13130 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0314 11:09:37.887341   13130 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0314 11:09:37.887285   13130 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0314 11:09:37.887274   13130 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0314 11:09:37.887409   13130 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 11:09:37.893385   13130 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 11:09:37.893489   13130 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0314 11:09:37.893578   13130 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0314 11:09:37.893640   13130 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0314 11:09:37.893704   13130 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0314 11:09:37.893761   13130 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0314 11:09:37.893825   13130 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0314 11:09:37.893881   13130 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0314 11:09:39.864140   13130 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0314 11:09:39.899595   13130 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0314 11:09:39.900896   13130 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0314 11:09:39.901000   13130 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0314 11:09:39.914847   13130 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0314 11:09:39.935309   13130 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0314 11:09:39.938964   13130 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0314 11:09:39.938983   13130 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0314 11:09:39.939043   13130 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0314 11:09:39.951666   13130 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0314 11:09:39.960808   13130 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	W0314 11:09:39.969873   13130 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0314 11:09:39.970766   13130 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0314 11:09:39.973016   13130 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0314 11:09:39.973035   13130 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0314 11:09:39.973068   13130 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0314 11:09:39.974278   13130 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0314 11:09:39.983639   13130 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0314 11:09:39.985082   13130 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0314 11:09:39.985097   13130 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0314 11:09:39.985123   13130 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0314 11:09:39.990606   13130 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0314 11:09:39.990720   13130 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0314 11:09:39.992472   13130 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0314 11:09:39.992487   13130 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0314 11:09:39.992526   13130 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0314 11:09:40.001032   13130 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0314 11:09:40.001055   13130 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0314 11:09:40.001111   13130 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0314 11:09:40.004159   13130 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0314 11:09:40.004197   13130 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0314 11:09:40.004206   13130 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0314 11:09:40.004269   13130 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0314 11:09:40.015550   13130 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0314 11:09:40.025728   13130 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0314 11:09:40.039420   13130 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0314 11:09:40.039428   13130 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0314 11:09:40.039454   13130 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0314 11:09:40.048996   13130 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0314 11:09:40.049017   13130 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0314 11:09:40.049068   13130 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0314 11:09:40.088189   13130 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0314 11:09:40.088311   13130 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0314 11:09:40.109529   13130 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0314 11:09:40.109561   13130 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0314 11:09:40.114805   13130 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0314 11:09:40.114817   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0314 11:09:40.189557   13130 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0314 11:09:40.189583   13130 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0314 11:09:40.189592   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0314 11:09:40.266903   13130 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0314 11:09:40.298729   13130 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0314 11:09:40.298746   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0314 11:09:40.429703   13130 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	W0314 11:09:40.484414   13130 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0314 11:09:40.484539   13130 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 11:09:40.496742   13130 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0314 11:09:40.496765   13130 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 11:09:40.496826   13130 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 11:09:41.503944   13130 ssh_runner.go:235] Completed: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.0071095s)
	I0314 11:09:41.503978   13130 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0314 11:09:41.504366   13130 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0314 11:09:41.509229   13130 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0314 11:09:41.509290   13130 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0314 11:09:41.565865   13130 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0314 11:09:41.565885   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0314 11:09:41.802969   13130 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0314 11:09:41.803007   13130 cache_images.go:92] duration metric: took 3.925688708s to LoadCachedImages
	W0314 11:09:41.803397   13130 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
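
Each "needs transfer" decision above comes from the same probe: cache_images inspects the image ID in the container runtime and, when it does not match the ID expected for the cached arm64 image (the preloaded k8s.gcr.io images are a different build than the registry.k8s.io ones this run wants), removes the image and reloads it from the local cache via "docker load". A sketch of that probe; the expected ID is the pause:3.7 hash from the log, and the sha256-prefix normalization is simplified:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // imageMatches runs the same inspect command as the log above and
    // compares the runtime's image ID against the ID expected for the
    // cached image. A mismatch or a missing image means "needs transfer".
    func imageMatches(ref, wantID string) bool {
    	out, err := exec.Command("docker", "image", "inspect",
    		"--format", "{{.Id}}", ref).Output()
    	if err != nil {
    		return false // image absent from the runtime entirely
    	}
    	got := strings.TrimPrefix(strings.TrimSpace(string(out)), "sha256:")
    	return got == wantID
    }

    func main() {
    	fmt.Println(imageMatches("registry.k8s.io/pause:3.7",
    		"e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550"))
    }
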
	I0314 11:09:41.803405   13130 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0314 11:09:41.803584   13130 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-636000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-636000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0314 11:09:41.803771   13130 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0314 11:09:41.817938   13130 cni.go:84] Creating CNI manager for ""
	I0314 11:09:41.817952   13130 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0314 11:09:41.818177   13130 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0314 11:09:41.818189   13130 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-636000 NodeName:running-upgrade-636000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0314 11:09:41.818250   13130 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-636000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0314 11:09:41.818297   13130 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0314 11:09:41.821280   13130 binaries.go:44] Found k8s binaries, skipping transfer
	I0314 11:09:41.821303   13130 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0314 11:09:41.823819   13130 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0314 11:09:41.828352   13130 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0314 11:09:41.832997   13130 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
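
The kubeadm.yaml above is rendered from the kubeadm options struct logged at kubeadm.go:181 and shipped to the guest as kubeadm.yaml.new. A pared-down illustration of that render step using Go's text/template; the template fragment and field names here are illustrative, not minikube's actual ones:

    package main

    import (
    	"os"
    	"text/template"
    )

    // A stand-in fragment of the config rendered above; values come from
    // the options struct in the log.
    const frag = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.NodeIP}}
      bindPort: {{.Port}}
    nodeRegistration:
      criSocket: {{.CRISocket}}
      name: "{{.Name}}"
    `

    func main() {
    	t := template.Must(template.New("kubeadm").Parse(frag))
    	t.Execute(os.Stdout, map[string]any{
    		"NodeIP":    "10.0.2.15",
    		"Port":      8443,
    		"CRISocket": "unix:///var/run/cri-dockerd.sock",
    		"Name":      "running-upgrade-636000",
    	})
    }
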
	I0314 11:09:41.838249   13130 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0314 11:09:41.839687   13130 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 11:09:41.923262   13130 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 11:09:41.929508   13130 certs.go:68] Setting up /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/running-upgrade-636000 for IP: 10.0.2.15
	I0314 11:09:41.929515   13130 certs.go:194] generating shared ca certs ...
	I0314 11:09:41.929524   13130 certs.go:226] acquiring lock for ca certs: {Name:mk6a5389e049f4ab73da9372eeaf63d358eca92f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 11:09:41.929737   13130 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18384-10823/.minikube/ca.key
	I0314 11:09:41.929782   13130 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18384-10823/.minikube/proxy-client-ca.key
	I0314 11:09:41.929787   13130 certs.go:256] generating profile certs ...
	I0314 11:09:41.929860   13130 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/running-upgrade-636000/client.key
	I0314 11:09:41.929871   13130 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/running-upgrade-636000/apiserver.key.d85832f1
	I0314 11:09:41.929881   13130 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/running-upgrade-636000/apiserver.crt.d85832f1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0314 11:09:42.030369   13130 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/running-upgrade-636000/apiserver.crt.d85832f1 ...
	I0314 11:09:42.030378   13130 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/running-upgrade-636000/apiserver.crt.d85832f1: {Name:mkdf853b35c528a75b69b9a55ec756219ef4bfcf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 11:09:42.030586   13130 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/running-upgrade-636000/apiserver.key.d85832f1 ...
	I0314 11:09:42.030591   13130 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/running-upgrade-636000/apiserver.key.d85832f1: {Name:mk581b1bc21bb1e334e702e4dab747be425c84a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 11:09:42.030702   13130 certs.go:381] copying /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/running-upgrade-636000/apiserver.crt.d85832f1 -> /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/running-upgrade-636000/apiserver.crt
	I0314 11:09:42.030821   13130 certs.go:385] copying /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/running-upgrade-636000/apiserver.key.d85832f1 -> /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/running-upgrade-636000/apiserver.key
	I0314 11:09:42.030950   13130 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/running-upgrade-636000/proxy-client.key
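
The SAN list used for the apiserver cert above, [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15], includes 10.96.0.1 because the first usable address of the service CIDR 10.96.0.0/12 is conventionally the ClusterIP of the in-cluster "kubernetes" service, so clients reaching the apiserver through that IP need it on the certificate. Computing that address in Go:

    package main

    import (
    	"fmt"
    	"net"
    )

    // firstServiceIP returns the conventional apiserver ClusterIP: the
    // first usable address in the service CIDR (10.96.0.0/12 -> 10.96.0.1).
    func firstServiceIP(cidr string) (net.IP, error) {
    	_, ipnet, err := net.ParseCIDR(cidr)
    	if err != nil {
    		return nil, err
    	}
    	ip := make(net.IP, len(ipnet.IP.To4()))
    	copy(ip, ipnet.IP.To4())
    	ip[len(ip)-1]++ // network address + 1
    	return ip, nil
    }

    func main() {
    	ip, _ := firstServiceIP("10.96.0.0/12")
    	fmt.Println(ip) // 10.96.0.1
    }
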
	I0314 11:09:42.031064   13130 certs.go:484] found cert: /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/11238.pem (1338 bytes)
	W0314 11:09:42.031092   13130 certs.go:480] ignoring /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/11238_empty.pem, impossibly tiny 0 bytes
	I0314 11:09:42.031097   13130 certs.go:484] found cert: /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/ca-key.pem (1679 bytes)
	I0314 11:09:42.031122   13130 certs.go:484] found cert: /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/ca.pem (1082 bytes)
	I0314 11:09:42.031146   13130 certs.go:484] found cert: /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/cert.pem (1123 bytes)
	I0314 11:09:42.031172   13130 certs.go:484] found cert: /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/key.pem (1675 bytes)
	I0314 11:09:42.031221   13130 certs.go:484] found cert: /Users/jenkins/minikube-integration/18384-10823/.minikube/files/etc/ssl/certs/112382.pem (1708 bytes)
	I0314 11:09:42.031916   13130 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18384-10823/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 11:09:42.039493   13130 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18384-10823/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0314 11:09:42.046520   13130 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18384-10823/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 11:09:42.053180   13130 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18384-10823/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0314 11:09:42.060747   13130 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/running-upgrade-636000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0314 11:09:42.068375   13130 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/running-upgrade-636000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0314 11:09:42.075752   13130 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/running-upgrade-636000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 11:09:42.083168   13130 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/running-upgrade-636000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0314 11:09:42.089598   13130 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18384-10823/.minikube/files/etc/ssl/certs/112382.pem --> /usr/share/ca-certificates/112382.pem (1708 bytes)
	I0314 11:09:42.096673   13130 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18384-10823/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 11:09:42.104144   13130 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/11238.pem --> /usr/share/ca-certificates/11238.pem (1338 bytes)
	I0314 11:09:42.110768   13130 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0314 11:09:42.115727   13130 ssh_runner.go:195] Run: openssl version
	I0314 11:09:42.117658   13130 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112382.pem && ln -fs /usr/share/ca-certificates/112382.pem /etc/ssl/certs/112382.pem"
	I0314 11:09:42.121261   13130 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112382.pem
	I0314 11:09:42.122830   13130 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 14 17:57 /usr/share/ca-certificates/112382.pem
	I0314 11:09:42.122850   13130 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112382.pem
	I0314 11:09:42.124637   13130 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112382.pem /etc/ssl/certs/3ec20f2e.0"
	I0314 11:09:42.127582   13130 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 11:09:42.130592   13130 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 11:09:42.132100   13130 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 14 18:09 /usr/share/ca-certificates/minikubeCA.pem
	I0314 11:09:42.132119   13130 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 11:09:42.134026   13130 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0314 11:09:42.137124   13130 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11238.pem && ln -fs /usr/share/ca-certificates/11238.pem /etc/ssl/certs/11238.pem"
	I0314 11:09:42.140684   13130 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11238.pem
	I0314 11:09:42.142287   13130 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 14 17:57 /usr/share/ca-certificates/11238.pem
	I0314 11:09:42.142309   13130 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11238.pem
	I0314 11:09:42.144414   13130 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11238.pem /etc/ssl/certs/51391683.0"
	I0314 11:09:42.147787   13130 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 11:09:42.149327   13130 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0314 11:09:42.151295   13130 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0314 11:09:42.153155   13130 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0314 11:09:42.154945   13130 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0314 11:09:42.157723   13130 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0314 11:09:42.159631   13130 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
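
The six openssl invocations above are expiry checks: "openssl x509 -checkend 86400" exits nonzero when the certificate expires within the next 86400 seconds (24 h), which is what would push a cert regeneration here. An equivalent check in Go, against one of the paths probed above:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin mirrors `openssl x509 -checkend`: it reports whether
    // the certificate's NotAfter falls inside the next d.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	fmt.Println("expires within 24h:", soon, "err:", err)
    }
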
	I0314 11:09:42.161616   13130 kubeadm.go:391] StartCluster: {Name:running-upgrade-636000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52128 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-636000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0314 11:09:42.161685   13130 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0314 11:09:42.172479   13130 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0314 11:09:42.175717   13130 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0314 11:09:42.175723   13130 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0314 11:09:42.175726   13130 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0314 11:09:42.175749   13130 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0314 11:09:42.178492   13130 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0314 11:09:42.178679   13130 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-636000" does not appear in /Users/jenkins/minikube-integration/18384-10823/kubeconfig
	I0314 11:09:42.178696   13130 kubeconfig.go:62] /Users/jenkins/minikube-integration/18384-10823/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-636000" cluster setting kubeconfig missing "running-upgrade-636000" context setting]
	I0314 11:09:42.178857   13130 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18384-10823/kubeconfig: {Name:mk22117ed76e85ca64a0d4fa77d593f7fc7d1176 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 11:09:42.180362   13130 kapi.go:59] client config for running-upgrade-636000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/running-upgrade-636000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/running-upgrade-636000/client.key", CAFile:"/Users/jenkins/minikube-integration/18384-10823/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1045a4630), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
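
The rest.Config dumped above is what the kapi helper builds to talk to the apiserver: host https://10.0.2.15:8443 plus the profile's client cert/key and the cluster CA. A sketch of constructing the same client config with client-go (assuming k8s.io/client-go is available in the module):

    package main

    import (
    	"fmt"

    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/rest"
    )

    func main() {
    	// Paths are the ones from the log above.
    	cfg := &rest.Config{
    		Host: "https://10.0.2.15:8443",
    		TLSClientConfig: rest.TLSClientConfig{
    			CertFile: "/Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/running-upgrade-636000/client.crt",
    			KeyFile:  "/Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/running-upgrade-636000/client.key",
    			CAFile:   "/Users/jenkins/minikube-integration/18384-10823/.minikube/ca.crt",
    		},
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	fmt.Println("clientset built:", cs != nil, "err:", err)
    }
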
	I0314 11:09:42.184286   13130 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0314 11:09:42.187319   13130 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-636000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0314 11:09:42.187324   13130 kubeadm.go:1153] stopping kube-system containers ...
	I0314 11:09:42.187360   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0314 11:09:42.201121   13130 docker.go:483] Stopping containers: [a5ab5b5c7f39 9d7a4cadc255 ecd9237272db 71a0c47a9be2 752181c4e427 3cf0e5f44ef4 ffda7ab9d6f7 851b96574861 32036b13d627 7643eded12f3 099ba80bf5dd 37d3c63d46cc c6649b73b85b]
	I0314 11:09:42.201190   13130 ssh_runner.go:195] Run: docker stop a5ab5b5c7f39 9d7a4cadc255 ecd9237272db 71a0c47a9be2 752181c4e427 3cf0e5f44ef4 ffda7ab9d6f7 851b96574861 32036b13d627 7643eded12f3 099ba80bf5dd 37d3c63d46cc c6649b73b85b
	I0314 11:09:42.217356   13130 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0314 11:09:42.325824   13130 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 11:09:42.330563   13130 kubeadm.go:156] found existing configuration files:
	-rw------- 1 root root 5643 Mar 14 18:09 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5649 Mar 14 18:09 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Mar 14 18:09 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5597 Mar 14 18:09 /etc/kubernetes/scheduler.conf
	
	I0314 11:09:42.330592   13130 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52128 /etc/kubernetes/admin.conf
	I0314 11:09:42.333958   13130 kubeadm.go:162] "https://control-plane.minikube.internal:52128" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:52128 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0314 11:09:42.333989   13130 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 11:09:42.337537   13130 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52128 /etc/kubernetes/kubelet.conf
	I0314 11:09:42.340852   13130 kubeadm.go:162] "https://control-plane.minikube.internal:52128" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:52128 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0314 11:09:42.340873   13130 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 11:09:42.344551   13130 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52128 /etc/kubernetes/controller-manager.conf
	I0314 11:09:42.348094   13130 kubeadm.go:162] "https://control-plane.minikube.internal:52128" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:52128 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0314 11:09:42.348123   13130 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 11:09:42.351278   13130 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52128 /etc/kubernetes/scheduler.conf
	I0314 11:09:42.354053   13130 kubeadm.go:162] "https://control-plane.minikube.internal:52128" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:52128 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0314 11:09:42.354070   13130 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
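
Each of the four static kubeconfigs is kept only if it already references the expected control-plane endpoint; a failed grep (the "Process exited with status 1" entries above) marks the file as stale, and it is removed so kubeadm can regenerate it. The loop below is a compact equivalent of that grep-or-remove cycle, with the endpoint taken from this run:

    # Remove any kubeconfig that does not mention the current
    # control-plane endpoint; kubeadm recreates it in the next step.
    endpoint="https://control-plane.minikube.internal:52128"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
        sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done
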
	I0314 11:09:42.357021   13130 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 11:09:42.360367   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 11:09:42.385023   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 11:09:42.898021   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0314 11:09:43.257440   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 11:09:43.277590   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
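
Rather than a full "kubeadm init", the restore path replays individual init phases against the copied config, with PATH pointed at the pinned v1.24.1 binaries. The loop below reproduces the exact phase sequence from the log:

    # Re-run the kubeadm init phases in the order the log shows.
    cfg=/var/tmp/minikube/kubeadm.yaml
    bins=/var/lib/minikube/binaries/v1.24.1
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
        sudo env PATH="$bins:$PATH" kubeadm init phase $phase --config "$cfg"   # $phase word-splits into subcommand args on purpose
    done
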
	I0314 11:09:43.300628   13130 api_server.go:52] waiting for apiserver process to appear ...
	I0314 11:09:43.300710   13130 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 11:09:43.803037   13130 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 11:09:44.302749   13130 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 11:09:44.310616   13130 api_server.go:72] duration metric: took 1.010149167s to wait for apiserver process to appear ...
	I0314 11:09:44.310623   13130 api_server.go:88] waiting for apiserver healthz status ...
	I0314 11:09:44.310631   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:09:49.313062   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:09:49.313113   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:09:54.313621   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:09:54.313709   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:09:59.314363   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:09:59.314383   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:10:04.315045   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:10:04.315123   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:10:09.316284   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:10:09.316357   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:10:14.318198   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:10:14.318292   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:10:19.320472   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:10:19.320566   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:10:24.323233   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:10:24.323314   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:10:29.325831   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:10:29.325884   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:10:34.328235   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:10:34.328309   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:10:39.330737   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:10:39.330826   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:10:44.333255   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
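
Each healthz probe above times out after roughly five seconds and is retried until an overall deadline, at which point minikube falls back to collecting diagnostics (next block). A rough stand-in for one probe follows; note that the real check authenticates with the cluster client certs, whereas this sketch uses -k to skip TLS verification, an assumption made only to keep the example self-contained:

    # Probe the apiserver healthz endpoint with a 5s client timeout
    # (-k skips cert verification; the real client presents client.crt/key).
    curl -k --max-time 5 https://10.0.2.15:8443/healthz || echo "apiserver not healthy yet"
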
	I0314 11:10:44.334335   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:10:44.347655   13130 logs.go:276] 2 containers: [bf97ed5e8eab ecd9237272db]
	I0314 11:10:44.347720   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:10:44.358140   13130 logs.go:276] 2 containers: [bf3b24ba54ce ffda7ab9d6f7]
	I0314 11:10:44.358211   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:10:44.369554   13130 logs.go:276] 1 containers: [34be9399e0e3]
	I0314 11:10:44.369636   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:10:44.384149   13130 logs.go:276] 2 containers: [47b861e30431 32036b13d627]
	I0314 11:10:44.384221   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:10:44.394646   13130 logs.go:276] 1 containers: [af01c9fe94bd]
	I0314 11:10:44.394714   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:10:44.405602   13130 logs.go:276] 2 containers: [b648808b453b 71a0c47a9be2]
	I0314 11:10:44.405673   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:10:44.415387   13130 logs.go:276] 0 containers: []
	W0314 11:10:44.415398   13130 logs.go:278] No container was found matching "kindnet"
	I0314 11:10:44.415459   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:10:44.426038   13130 logs.go:276] 2 containers: [819dad21cc40 5b39a3bc91a7]
	I0314 11:10:44.426054   13130 logs.go:123] Gathering logs for etcd [ffda7ab9d6f7] ...
	I0314 11:10:44.426058   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffda7ab9d6f7"
	I0314 11:10:44.441776   13130 logs.go:123] Gathering logs for kube-controller-manager [71a0c47a9be2] ...
	I0314 11:10:44.441786   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71a0c47a9be2"
	I0314 11:10:44.453291   13130 logs.go:123] Gathering logs for dmesg ...
	I0314 11:10:44.453305   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:10:44.458058   13130 logs.go:123] Gathering logs for kube-apiserver [bf97ed5e8eab] ...
	I0314 11:10:44.458068   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf97ed5e8eab"
	I0314 11:10:44.472327   13130 logs.go:123] Gathering logs for kube-apiserver [ecd9237272db] ...
	I0314 11:10:44.472337   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecd9237272db"
	I0314 11:10:44.485108   13130 logs.go:123] Gathering logs for storage-provisioner [5b39a3bc91a7] ...
	I0314 11:10:44.485126   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b39a3bc91a7"
	I0314 11:10:44.496052   13130 logs.go:123] Gathering logs for kubelet ...
	I0314 11:10:44.496063   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:10:44.531825   13130 logs.go:123] Gathering logs for etcd [bf3b24ba54ce] ...
	I0314 11:10:44.531834   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf3b24ba54ce"
	I0314 11:10:44.545330   13130 logs.go:123] Gathering logs for kube-scheduler [47b861e30431] ...
	I0314 11:10:44.545339   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b861e30431"
	I0314 11:10:44.557023   13130 logs.go:123] Gathering logs for Docker ...
	I0314 11:10:44.557032   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:10:44.581731   13130 logs.go:123] Gathering logs for container status ...
	I0314 11:10:44.581739   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:10:44.593552   13130 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:10:44.593562   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:10:44.686049   13130 logs.go:123] Gathering logs for kube-proxy [af01c9fe94bd] ...
	I0314 11:10:44.686063   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af01c9fe94bd"
	I0314 11:10:44.697796   13130 logs.go:123] Gathering logs for storage-provisioner [819dad21cc40] ...
	I0314 11:10:44.697806   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 819dad21cc40"
	I0314 11:10:44.709717   13130 logs.go:123] Gathering logs for coredns [34be9399e0e3] ...
	I0314 11:10:44.709726   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34be9399e0e3"
	I0314 11:10:44.720544   13130 logs.go:123] Gathering logs for kube-scheduler [32036b13d627] ...
	I0314 11:10:44.720553   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32036b13d627"
	I0314 11:10:44.736315   13130 logs.go:123] Gathering logs for kube-controller-manager [b648808b453b] ...
	I0314 11:10:44.736325   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b648808b453b"
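
With the apiserver still unhealthy, the diagnostic pass above enumerates containers by role and tails each one, alongside the kubelet and docker journals, dmesg, and "kubectl describe nodes". A representative subset of those commands, copied from the log (the container ID is the one from this run):

    docker ps -a --filter=name=k8s_kube-apiserver --format='{{.ID}}'   # find apiserver containers by name
    docker logs --tail 400 bf97ed5e8eab                                # tail one of them
    sudo journalctl -u kubelet -n 400                                  # kubelet journal
    sudo journalctl -u docker -u cri-docker -n 400                     # docker + cri-docker journals
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400

The same probe-and-collect cycle then repeats at a fixed interval for as long as the healthz endpoint stays unreachable, which is what the remainder of this log shows.
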
	I0314 11:10:47.260173   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:10:52.262565   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:10:52.262926   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:10:52.295466   13130 logs.go:276] 2 containers: [bf97ed5e8eab ecd9237272db]
	I0314 11:10:52.295627   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:10:52.320199   13130 logs.go:276] 2 containers: [bf3b24ba54ce ffda7ab9d6f7]
	I0314 11:10:52.320316   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:10:52.336273   13130 logs.go:276] 1 containers: [34be9399e0e3]
	I0314 11:10:52.336351   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:10:52.349072   13130 logs.go:276] 2 containers: [47b861e30431 32036b13d627]
	I0314 11:10:52.349144   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:10:52.359586   13130 logs.go:276] 1 containers: [af01c9fe94bd]
	I0314 11:10:52.359654   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:10:52.369924   13130 logs.go:276] 2 containers: [b648808b453b 71a0c47a9be2]
	I0314 11:10:52.369992   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:10:52.379621   13130 logs.go:276] 0 containers: []
	W0314 11:10:52.379631   13130 logs.go:278] No container was found matching "kindnet"
	I0314 11:10:52.379684   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:10:52.393923   13130 logs.go:276] 2 containers: [819dad21cc40 5b39a3bc91a7]
	I0314 11:10:52.393942   13130 logs.go:123] Gathering logs for kube-apiserver [bf97ed5e8eab] ...
	I0314 11:10:52.393948   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf97ed5e8eab"
	I0314 11:10:52.408134   13130 logs.go:123] Gathering logs for etcd [ffda7ab9d6f7] ...
	I0314 11:10:52.408143   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffda7ab9d6f7"
	I0314 11:10:52.423181   13130 logs.go:123] Gathering logs for coredns [34be9399e0e3] ...
	I0314 11:10:52.423212   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34be9399e0e3"
	I0314 11:10:52.434440   13130 logs.go:123] Gathering logs for kube-scheduler [47b861e30431] ...
	I0314 11:10:52.434449   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b861e30431"
	I0314 11:10:52.446031   13130 logs.go:123] Gathering logs for storage-provisioner [5b39a3bc91a7] ...
	I0314 11:10:52.446041   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b39a3bc91a7"
	I0314 11:10:52.457325   13130 logs.go:123] Gathering logs for etcd [bf3b24ba54ce] ...
	I0314 11:10:52.457334   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf3b24ba54ce"
	I0314 11:10:52.470981   13130 logs.go:123] Gathering logs for kube-proxy [af01c9fe94bd] ...
	I0314 11:10:52.470990   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af01c9fe94bd"
	I0314 11:10:52.483279   13130 logs.go:123] Gathering logs for kube-controller-manager [b648808b453b] ...
	I0314 11:10:52.483287   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b648808b453b"
	I0314 11:10:52.500480   13130 logs.go:123] Gathering logs for storage-provisioner [819dad21cc40] ...
	I0314 11:10:52.500490   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 819dad21cc40"
	I0314 11:10:52.512377   13130 logs.go:123] Gathering logs for kube-scheduler [32036b13d627] ...
	I0314 11:10:52.512388   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32036b13d627"
	I0314 11:10:52.527650   13130 logs.go:123] Gathering logs for kube-controller-manager [71a0c47a9be2] ...
	I0314 11:10:52.527658   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71a0c47a9be2"
	I0314 11:10:52.539093   13130 logs.go:123] Gathering logs for Docker ...
	I0314 11:10:52.539103   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:10:52.564577   13130 logs.go:123] Gathering logs for container status ...
	I0314 11:10:52.564584   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:10:52.575904   13130 logs.go:123] Gathering logs for kubelet ...
	I0314 11:10:52.575916   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:10:52.614359   13130 logs.go:123] Gathering logs for dmesg ...
	I0314 11:10:52.614366   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:10:52.618418   13130 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:10:52.618425   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:10:52.655830   13130 logs.go:123] Gathering logs for kube-apiserver [ecd9237272db] ...
	I0314 11:10:52.655841   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecd9237272db"
	I0314 11:10:55.170357   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:11:00.173165   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:11:00.173565   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:11:00.210224   13130 logs.go:276] 2 containers: [bf97ed5e8eab ecd9237272db]
	I0314 11:11:00.210375   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:11:00.233543   13130 logs.go:276] 2 containers: [bf3b24ba54ce ffda7ab9d6f7]
	I0314 11:11:00.233645   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:11:00.252451   13130 logs.go:276] 1 containers: [34be9399e0e3]
	I0314 11:11:00.252520   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:11:00.264844   13130 logs.go:276] 2 containers: [47b861e30431 32036b13d627]
	I0314 11:11:00.264912   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:11:00.275275   13130 logs.go:276] 1 containers: [af01c9fe94bd]
	I0314 11:11:00.275349   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:11:00.286058   13130 logs.go:276] 2 containers: [b648808b453b 71a0c47a9be2]
	I0314 11:11:00.286137   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:11:00.296541   13130 logs.go:276] 0 containers: []
	W0314 11:11:00.296550   13130 logs.go:278] No container was found matching "kindnet"
	I0314 11:11:00.296600   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:11:00.308902   13130 logs.go:276] 2 containers: [819dad21cc40 5b39a3bc91a7]
	I0314 11:11:00.308925   13130 logs.go:123] Gathering logs for storage-provisioner [819dad21cc40] ...
	I0314 11:11:00.308930   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 819dad21cc40"
	I0314 11:11:00.320768   13130 logs.go:123] Gathering logs for kube-scheduler [47b861e30431] ...
	I0314 11:11:00.320781   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b861e30431"
	I0314 11:11:00.332247   13130 logs.go:123] Gathering logs for kube-scheduler [32036b13d627] ...
	I0314 11:11:00.332266   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32036b13d627"
	I0314 11:11:00.347529   13130 logs.go:123] Gathering logs for kube-apiserver [ecd9237272db] ...
	I0314 11:11:00.347541   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecd9237272db"
	I0314 11:11:00.359925   13130 logs.go:123] Gathering logs for etcd [bf3b24ba54ce] ...
	I0314 11:11:00.359934   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf3b24ba54ce"
	I0314 11:11:00.381812   13130 logs.go:123] Gathering logs for etcd [ffda7ab9d6f7] ...
	I0314 11:11:00.381826   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffda7ab9d6f7"
	I0314 11:11:00.395921   13130 logs.go:123] Gathering logs for kube-proxy [af01c9fe94bd] ...
	I0314 11:11:00.395931   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af01c9fe94bd"
	I0314 11:11:00.412015   13130 logs.go:123] Gathering logs for kube-controller-manager [71a0c47a9be2] ...
	I0314 11:11:00.412024   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71a0c47a9be2"
	I0314 11:11:00.423295   13130 logs.go:123] Gathering logs for container status ...
	I0314 11:11:00.423306   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:11:00.435156   13130 logs.go:123] Gathering logs for kube-apiserver [bf97ed5e8eab] ...
	I0314 11:11:00.435167   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf97ed5e8eab"
	I0314 11:11:00.453206   13130 logs.go:123] Gathering logs for dmesg ...
	I0314 11:11:00.453216   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:11:00.457807   13130 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:11:00.457813   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:11:00.495287   13130 logs.go:123] Gathering logs for coredns [34be9399e0e3] ...
	I0314 11:11:00.495301   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34be9399e0e3"
	I0314 11:11:00.506473   13130 logs.go:123] Gathering logs for kube-controller-manager [b648808b453b] ...
	I0314 11:11:00.506484   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b648808b453b"
	I0314 11:11:00.523802   13130 logs.go:123] Gathering logs for storage-provisioner [5b39a3bc91a7] ...
	I0314 11:11:00.523813   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b39a3bc91a7"
	I0314 11:11:00.534990   13130 logs.go:123] Gathering logs for Docker ...
	I0314 11:11:00.535001   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:11:00.559320   13130 logs.go:123] Gathering logs for kubelet ...
	I0314 11:11:00.559327   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:11:03.100442   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:11:08.103216   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:11:08.103538   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:11:08.136463   13130 logs.go:276] 2 containers: [bf97ed5e8eab ecd9237272db]
	I0314 11:11:08.136612   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:11:08.159963   13130 logs.go:276] 2 containers: [bf3b24ba54ce ffda7ab9d6f7]
	I0314 11:11:08.160089   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:11:08.176206   13130 logs.go:276] 1 containers: [34be9399e0e3]
	I0314 11:11:08.176284   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:11:08.189136   13130 logs.go:276] 2 containers: [47b861e30431 32036b13d627]
	I0314 11:11:08.189198   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:11:08.200308   13130 logs.go:276] 1 containers: [af01c9fe94bd]
	I0314 11:11:08.200377   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:11:08.211174   13130 logs.go:276] 2 containers: [b648808b453b 71a0c47a9be2]
	I0314 11:11:08.211241   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:11:08.221386   13130 logs.go:276] 0 containers: []
	W0314 11:11:08.221399   13130 logs.go:278] No container was found matching "kindnet"
	I0314 11:11:08.221453   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:11:08.232114   13130 logs.go:276] 2 containers: [819dad21cc40 5b39a3bc91a7]
	I0314 11:11:08.232135   13130 logs.go:123] Gathering logs for storage-provisioner [5b39a3bc91a7] ...
	I0314 11:11:08.232141   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b39a3bc91a7"
	I0314 11:11:08.247436   13130 logs.go:123] Gathering logs for dmesg ...
	I0314 11:11:08.247447   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:11:08.251892   13130 logs.go:123] Gathering logs for kube-apiserver [ecd9237272db] ...
	I0314 11:11:08.251898   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecd9237272db"
	I0314 11:11:08.265437   13130 logs.go:123] Gathering logs for coredns [34be9399e0e3] ...
	I0314 11:11:08.265446   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34be9399e0e3"
	I0314 11:11:08.276648   13130 logs.go:123] Gathering logs for kube-proxy [af01c9fe94bd] ...
	I0314 11:11:08.276659   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af01c9fe94bd"
	I0314 11:11:08.288176   13130 logs.go:123] Gathering logs for kube-controller-manager [71a0c47a9be2] ...
	I0314 11:11:08.288187   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71a0c47a9be2"
	I0314 11:11:08.299846   13130 logs.go:123] Gathering logs for storage-provisioner [819dad21cc40] ...
	I0314 11:11:08.299859   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 819dad21cc40"
	I0314 11:11:08.311657   13130 logs.go:123] Gathering logs for Docker ...
	I0314 11:11:08.311668   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:11:08.336155   13130 logs.go:123] Gathering logs for container status ...
	I0314 11:11:08.336161   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:11:08.347707   13130 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:11:08.347716   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:11:08.389497   13130 logs.go:123] Gathering logs for kube-apiserver [bf97ed5e8eab] ...
	I0314 11:11:08.389510   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf97ed5e8eab"
	I0314 11:11:08.403708   13130 logs.go:123] Gathering logs for etcd [bf3b24ba54ce] ...
	I0314 11:11:08.403719   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf3b24ba54ce"
	I0314 11:11:08.417284   13130 logs.go:123] Gathering logs for kube-scheduler [47b861e30431] ...
	I0314 11:11:08.417295   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b861e30431"
	I0314 11:11:08.428496   13130 logs.go:123] Gathering logs for kube-controller-manager [b648808b453b] ...
	I0314 11:11:08.428506   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b648808b453b"
	I0314 11:11:08.450606   13130 logs.go:123] Gathering logs for kubelet ...
	I0314 11:11:08.450618   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:11:08.488625   13130 logs.go:123] Gathering logs for etcd [ffda7ab9d6f7] ...
	I0314 11:11:08.488636   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffda7ab9d6f7"
	I0314 11:11:08.503295   13130 logs.go:123] Gathering logs for kube-scheduler [32036b13d627] ...
	I0314 11:11:08.503305   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32036b13d627"
	I0314 11:11:11.019777   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:11:16.020236   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:11:16.020515   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:11:16.048249   13130 logs.go:276] 2 containers: [bf97ed5e8eab ecd9237272db]
	I0314 11:11:16.048364   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:11:16.065318   13130 logs.go:276] 2 containers: [bf3b24ba54ce ffda7ab9d6f7]
	I0314 11:11:16.065408   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:11:16.078711   13130 logs.go:276] 1 containers: [34be9399e0e3]
	I0314 11:11:16.078782   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:11:16.089898   13130 logs.go:276] 2 containers: [47b861e30431 32036b13d627]
	I0314 11:11:16.089977   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:11:16.100277   13130 logs.go:276] 1 containers: [af01c9fe94bd]
	I0314 11:11:16.100344   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:11:16.115458   13130 logs.go:276] 2 containers: [b648808b453b 71a0c47a9be2]
	I0314 11:11:16.115539   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:11:16.125442   13130 logs.go:276] 0 containers: []
	W0314 11:11:16.125453   13130 logs.go:278] No container was found matching "kindnet"
	I0314 11:11:16.125503   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:11:16.136158   13130 logs.go:276] 2 containers: [819dad21cc40 5b39a3bc91a7]
	I0314 11:11:16.136174   13130 logs.go:123] Gathering logs for dmesg ...
	I0314 11:11:16.136180   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:11:16.141003   13130 logs.go:123] Gathering logs for storage-provisioner [819dad21cc40] ...
	I0314 11:11:16.141010   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 819dad21cc40"
	I0314 11:11:16.152064   13130 logs.go:123] Gathering logs for container status ...
	I0314 11:11:16.152074   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:11:16.164821   13130 logs.go:123] Gathering logs for Docker ...
	I0314 11:11:16.164830   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:11:16.191330   13130 logs.go:123] Gathering logs for kube-scheduler [47b861e30431] ...
	I0314 11:11:16.191338   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b861e30431"
	I0314 11:11:16.203197   13130 logs.go:123] Gathering logs for kube-scheduler [32036b13d627] ...
	I0314 11:11:16.203208   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32036b13d627"
	I0314 11:11:16.218471   13130 logs.go:123] Gathering logs for kube-proxy [af01c9fe94bd] ...
	I0314 11:11:16.218483   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af01c9fe94bd"
	I0314 11:11:16.230217   13130 logs.go:123] Gathering logs for kubelet ...
	I0314 11:11:16.230228   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:11:16.267376   13130 logs.go:123] Gathering logs for kube-apiserver [bf97ed5e8eab] ...
	I0314 11:11:16.267384   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf97ed5e8eab"
	I0314 11:11:16.281047   13130 logs.go:123] Gathering logs for etcd [bf3b24ba54ce] ...
	I0314 11:11:16.281058   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf3b24ba54ce"
	I0314 11:11:16.294539   13130 logs.go:123] Gathering logs for etcd [ffda7ab9d6f7] ...
	I0314 11:11:16.294547   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffda7ab9d6f7"
	I0314 11:11:16.308703   13130 logs.go:123] Gathering logs for kube-controller-manager [71a0c47a9be2] ...
	I0314 11:11:16.308715   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71a0c47a9be2"
	I0314 11:11:16.320788   13130 logs.go:123] Gathering logs for storage-provisioner [5b39a3bc91a7] ...
	I0314 11:11:16.320799   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b39a3bc91a7"
	I0314 11:11:16.332179   13130 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:11:16.332189   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:11:16.369658   13130 logs.go:123] Gathering logs for kube-apiserver [ecd9237272db] ...
	I0314 11:11:16.369670   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecd9237272db"
	I0314 11:11:16.382163   13130 logs.go:123] Gathering logs for coredns [34be9399e0e3] ...
	I0314 11:11:16.382173   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34be9399e0e3"
	I0314 11:11:16.393630   13130 logs.go:123] Gathering logs for kube-controller-manager [b648808b453b] ...
	I0314 11:11:16.393648   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b648808b453b"
	I0314 11:11:18.913060   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:11:23.915321   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:11:23.915449   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:11:23.927555   13130 logs.go:276] 2 containers: [bf97ed5e8eab ecd9237272db]
	I0314 11:11:23.927632   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:11:23.938522   13130 logs.go:276] 2 containers: [bf3b24ba54ce ffda7ab9d6f7]
	I0314 11:11:23.938591   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:11:23.949059   13130 logs.go:276] 1 containers: [34be9399e0e3]
	I0314 11:11:23.949125   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:11:23.959823   13130 logs.go:276] 2 containers: [47b861e30431 32036b13d627]
	I0314 11:11:23.959890   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:11:23.970674   13130 logs.go:276] 1 containers: [af01c9fe94bd]
	I0314 11:11:23.970744   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:11:23.981626   13130 logs.go:276] 2 containers: [b648808b453b 71a0c47a9be2]
	I0314 11:11:23.981693   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:11:23.991386   13130 logs.go:276] 0 containers: []
	W0314 11:11:23.991397   13130 logs.go:278] No container was found matching "kindnet"
	I0314 11:11:23.991452   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:11:24.002007   13130 logs.go:276] 2 containers: [819dad21cc40 5b39a3bc91a7]
	I0314 11:11:24.002025   13130 logs.go:123] Gathering logs for kube-apiserver [ecd9237272db] ...
	I0314 11:11:24.002031   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecd9237272db"
	I0314 11:11:24.014600   13130 logs.go:123] Gathering logs for coredns [34be9399e0e3] ...
	I0314 11:11:24.017100   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34be9399e0e3"
	I0314 11:11:24.028888   13130 logs.go:123] Gathering logs for dmesg ...
	I0314 11:11:24.028900   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:11:24.033682   13130 logs.go:123] Gathering logs for etcd [bf3b24ba54ce] ...
	I0314 11:11:24.033691   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf3b24ba54ce"
	I0314 11:11:24.047982   13130 logs.go:123] Gathering logs for etcd [ffda7ab9d6f7] ...
	I0314 11:11:24.047992   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffda7ab9d6f7"
	I0314 11:11:24.062515   13130 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:11:24.062525   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:11:24.099743   13130 logs.go:123] Gathering logs for kube-scheduler [47b861e30431] ...
	I0314 11:11:24.099756   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b861e30431"
	I0314 11:11:24.111859   13130 logs.go:123] Gathering logs for kube-scheduler [32036b13d627] ...
	I0314 11:11:24.111879   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32036b13d627"
	I0314 11:11:24.127217   13130 logs.go:123] Gathering logs for kube-controller-manager [71a0c47a9be2] ...
	I0314 11:11:24.127226   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71a0c47a9be2"
	I0314 11:11:24.139568   13130 logs.go:123] Gathering logs for storage-provisioner [819dad21cc40] ...
	I0314 11:11:24.139580   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 819dad21cc40"
	I0314 11:11:24.151814   13130 logs.go:123] Gathering logs for storage-provisioner [5b39a3bc91a7] ...
	I0314 11:11:24.151825   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b39a3bc91a7"
	I0314 11:11:24.163638   13130 logs.go:123] Gathering logs for Docker ...
	I0314 11:11:24.163648   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:11:24.188134   13130 logs.go:123] Gathering logs for kubelet ...
	I0314 11:11:24.188145   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:11:24.225209   13130 logs.go:123] Gathering logs for kube-apiserver [bf97ed5e8eab] ...
	I0314 11:11:24.225217   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf97ed5e8eab"
	I0314 11:11:24.238993   13130 logs.go:123] Gathering logs for kube-proxy [af01c9fe94bd] ...
	I0314 11:11:24.239003   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af01c9fe94bd"
	I0314 11:11:24.250681   13130 logs.go:123] Gathering logs for kube-controller-manager [b648808b453b] ...
	I0314 11:11:24.250695   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b648808b453b"
	I0314 11:11:24.268842   13130 logs.go:123] Gathering logs for container status ...
	I0314 11:11:24.268853   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:11:26.782876   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:11:31.785097   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:11:31.785266   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:11:31.797548   13130 logs.go:276] 2 containers: [bf97ed5e8eab ecd9237272db]
	I0314 11:11:31.797628   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:11:31.810274   13130 logs.go:276] 2 containers: [bf3b24ba54ce ffda7ab9d6f7]
	I0314 11:11:31.810346   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:11:31.822836   13130 logs.go:276] 1 containers: [34be9399e0e3]
	I0314 11:11:31.822914   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:11:31.833763   13130 logs.go:276] 2 containers: [47b861e30431 32036b13d627]
	I0314 11:11:31.833838   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:11:31.844571   13130 logs.go:276] 1 containers: [af01c9fe94bd]
	I0314 11:11:31.844641   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:11:31.855764   13130 logs.go:276] 2 containers: [b648808b453b 71a0c47a9be2]
	I0314 11:11:31.855837   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:11:31.866329   13130 logs.go:276] 0 containers: []
	W0314 11:11:31.866342   13130 logs.go:278] No container was found matching "kindnet"
	I0314 11:11:31.866404   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:11:31.877178   13130 logs.go:276] 2 containers: [819dad21cc40 5b39a3bc91a7]
	I0314 11:11:31.877197   13130 logs.go:123] Gathering logs for coredns [34be9399e0e3] ...
	I0314 11:11:31.877205   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34be9399e0e3"
	I0314 11:11:31.889138   13130 logs.go:123] Gathering logs for kube-controller-manager [b648808b453b] ...
	I0314 11:11:31.889152   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b648808b453b"
	I0314 11:11:31.906830   13130 logs.go:123] Gathering logs for storage-provisioner [5b39a3bc91a7] ...
	I0314 11:11:31.906842   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b39a3bc91a7"
	I0314 11:11:31.919345   13130 logs.go:123] Gathering logs for kube-proxy [af01c9fe94bd] ...
	I0314 11:11:31.919356   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af01c9fe94bd"
	I0314 11:11:31.931224   13130 logs.go:123] Gathering logs for dmesg ...
	I0314 11:11:31.931244   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:11:31.936614   13130 logs.go:123] Gathering logs for kube-apiserver [bf97ed5e8eab] ...
	I0314 11:11:31.936621   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf97ed5e8eab"
	I0314 11:11:31.951261   13130 logs.go:123] Gathering logs for kube-apiserver [ecd9237272db] ...
	I0314 11:11:31.951271   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecd9237272db"
	I0314 11:11:31.964032   13130 logs.go:123] Gathering logs for kube-scheduler [47b861e30431] ...
	I0314 11:11:31.964043   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b861e30431"
	I0314 11:11:31.976398   13130 logs.go:123] Gathering logs for kube-scheduler [32036b13d627] ...
	I0314 11:11:31.976410   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32036b13d627"
	I0314 11:11:31.992326   13130 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:11:31.992337   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:11:32.030179   13130 logs.go:123] Gathering logs for etcd [bf3b24ba54ce] ...
	I0314 11:11:32.030192   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf3b24ba54ce"
	I0314 11:11:32.044342   13130 logs.go:123] Gathering logs for kube-controller-manager [71a0c47a9be2] ...
	I0314 11:11:32.044353   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71a0c47a9be2"
	I0314 11:11:32.056242   13130 logs.go:123] Gathering logs for Docker ...
	I0314 11:11:32.056254   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:11:32.082753   13130 logs.go:123] Gathering logs for container status ...
	I0314 11:11:32.082764   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:11:32.094613   13130 logs.go:123] Gathering logs for kubelet ...
	I0314 11:11:32.094625   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:11:32.133062   13130 logs.go:123] Gathering logs for etcd [ffda7ab9d6f7] ...
	I0314 11:11:32.133070   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffda7ab9d6f7"
	I0314 11:11:32.148000   13130 logs.go:123] Gathering logs for storage-provisioner [819dad21cc40] ...
	I0314 11:11:32.148011   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 819dad21cc40"
	I0314 11:11:34.662234   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:11:39.662776   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:11:39.662878   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:11:39.675310   13130 logs.go:276] 2 containers: [bf97ed5e8eab ecd9237272db]
	I0314 11:11:39.675388   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:11:39.688485   13130 logs.go:276] 2 containers: [bf3b24ba54ce ffda7ab9d6f7]
	I0314 11:11:39.688565   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:11:39.703054   13130 logs.go:276] 1 containers: [34be9399e0e3]
	I0314 11:11:39.703134   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:11:39.716902   13130 logs.go:276] 2 containers: [47b861e30431 32036b13d627]
	I0314 11:11:39.716978   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:11:39.729639   13130 logs.go:276] 1 containers: [af01c9fe94bd]
	I0314 11:11:39.729725   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:11:39.742419   13130 logs.go:276] 2 containers: [b648808b453b 71a0c47a9be2]
	I0314 11:11:39.742500   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:11:39.756168   13130 logs.go:276] 0 containers: []
	W0314 11:11:39.756179   13130 logs.go:278] No container was found matching "kindnet"
	I0314 11:11:39.756248   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:11:39.769208   13130 logs.go:276] 2 containers: [819dad21cc40 5b39a3bc91a7]
	I0314 11:11:39.769227   13130 logs.go:123] Gathering logs for storage-provisioner [5b39a3bc91a7] ...
	I0314 11:11:39.769234   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b39a3bc91a7"
	I0314 11:11:39.782513   13130 logs.go:123] Gathering logs for container status ...
	I0314 11:11:39.782527   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:11:39.798252   13130 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:11:39.798266   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:11:39.851098   13130 logs.go:123] Gathering logs for kube-scheduler [47b861e30431] ...
	I0314 11:11:39.851111   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b861e30431"
	I0314 11:11:39.873816   13130 logs.go:123] Gathering logs for kube-scheduler [32036b13d627] ...
	I0314 11:11:39.873829   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32036b13d627"
	I0314 11:11:39.890576   13130 logs.go:123] Gathering logs for kube-proxy [af01c9fe94bd] ...
	I0314 11:11:39.890589   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af01c9fe94bd"
	I0314 11:11:39.904387   13130 logs.go:123] Gathering logs for kube-controller-manager [71a0c47a9be2] ...
	I0314 11:11:39.904401   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71a0c47a9be2"
	I0314 11:11:39.917336   13130 logs.go:123] Gathering logs for storage-provisioner [819dad21cc40] ...
	I0314 11:11:39.917351   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 819dad21cc40"
	I0314 11:11:39.930800   13130 logs.go:123] Gathering logs for kubelet ...
	I0314 11:11:39.930812   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:11:39.971773   13130 logs.go:123] Gathering logs for dmesg ...
	I0314 11:11:39.971788   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:11:39.977256   13130 logs.go:123] Gathering logs for kube-apiserver [bf97ed5e8eab] ...
	I0314 11:11:39.977269   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf97ed5e8eab"
	I0314 11:11:39.993083   13130 logs.go:123] Gathering logs for coredns [34be9399e0e3] ...
	I0314 11:11:39.993098   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34be9399e0e3"
	I0314 11:11:40.006332   13130 logs.go:123] Gathering logs for Docker ...
	I0314 11:11:40.006344   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:11:40.033769   13130 logs.go:123] Gathering logs for kube-apiserver [ecd9237272db] ...
	I0314 11:11:40.033786   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecd9237272db"
	I0314 11:11:40.047783   13130 logs.go:123] Gathering logs for etcd [bf3b24ba54ce] ...
	I0314 11:11:40.047797   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf3b24ba54ce"
	I0314 11:11:40.065748   13130 logs.go:123] Gathering logs for etcd [ffda7ab9d6f7] ...
	I0314 11:11:40.065760   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffda7ab9d6f7"
	I0314 11:11:40.082967   13130 logs.go:123] Gathering logs for kube-controller-manager [b648808b453b] ...
	I0314 11:11:40.082981   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b648808b453b"
	I0314 11:11:42.604320   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:11:47.606780   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:11:47.607710   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:11:47.634397   13130 logs.go:276] 2 containers: [bf97ed5e8eab ecd9237272db]
	I0314 11:11:47.634485   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:11:47.645533   13130 logs.go:276] 2 containers: [bf3b24ba54ce ffda7ab9d6f7]
	I0314 11:11:47.645613   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:11:47.656227   13130 logs.go:276] 1 containers: [34be9399e0e3]
	I0314 11:11:47.656296   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:11:47.670957   13130 logs.go:276] 2 containers: [47b861e30431 32036b13d627]
	I0314 11:11:47.671038   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:11:47.682062   13130 logs.go:276] 1 containers: [af01c9fe94bd]
	I0314 11:11:47.682175   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:11:47.694143   13130 logs.go:276] 2 containers: [b648808b453b 71a0c47a9be2]
	I0314 11:11:47.694209   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:11:47.704982   13130 logs.go:276] 0 containers: []
	W0314 11:11:47.704995   13130 logs.go:278] No container was found matching "kindnet"
	I0314 11:11:47.705059   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:11:47.715972   13130 logs.go:276] 2 containers: [819dad21cc40 5b39a3bc91a7]
	I0314 11:11:47.716006   13130 logs.go:123] Gathering logs for etcd [ffda7ab9d6f7] ...
	I0314 11:11:47.716014   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffda7ab9d6f7"
	I0314 11:11:47.731206   13130 logs.go:123] Gathering logs for coredns [34be9399e0e3] ...
	I0314 11:11:47.731218   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34be9399e0e3"
	I0314 11:11:47.744778   13130 logs.go:123] Gathering logs for kube-scheduler [47b861e30431] ...
	I0314 11:11:47.744790   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b861e30431"
	I0314 11:11:47.757296   13130 logs.go:123] Gathering logs for kube-scheduler [32036b13d627] ...
	I0314 11:11:47.757308   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32036b13d627"
	I0314 11:11:47.772661   13130 logs.go:123] Gathering logs for kube-proxy [af01c9fe94bd] ...
	I0314 11:11:47.772672   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af01c9fe94bd"
	I0314 11:11:47.785747   13130 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:11:47.785758   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:11:47.831756   13130 logs.go:123] Gathering logs for kube-apiserver [ecd9237272db] ...
	I0314 11:11:47.831766   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecd9237272db"
	I0314 11:11:47.847492   13130 logs.go:123] Gathering logs for kubelet ...
	I0314 11:11:47.847504   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:11:47.887870   13130 logs.go:123] Gathering logs for kube-controller-manager [b648808b453b] ...
	I0314 11:11:47.887892   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b648808b453b"
	I0314 11:11:47.908738   13130 logs.go:123] Gathering logs for kube-controller-manager [71a0c47a9be2] ...
	I0314 11:11:47.908757   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71a0c47a9be2"
	I0314 11:11:47.920575   13130 logs.go:123] Gathering logs for storage-provisioner [819dad21cc40] ...
	I0314 11:11:47.920585   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 819dad21cc40"
	I0314 11:11:47.932666   13130 logs.go:123] Gathering logs for storage-provisioner [5b39a3bc91a7] ...
	I0314 11:11:47.932677   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b39a3bc91a7"
	I0314 11:11:47.944836   13130 logs.go:123] Gathering logs for kube-apiserver [bf97ed5e8eab] ...
	I0314 11:11:47.944847   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf97ed5e8eab"
	I0314 11:11:47.958907   13130 logs.go:123] Gathering logs for etcd [bf3b24ba54ce] ...
	I0314 11:11:47.958917   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf3b24ba54ce"
	I0314 11:11:47.976898   13130 logs.go:123] Gathering logs for Docker ...
	I0314 11:11:47.976922   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:11:48.005859   13130 logs.go:123] Gathering logs for container status ...
	I0314 11:11:48.005876   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:11:48.022342   13130 logs.go:123] Gathering logs for dmesg ...
	I0314 11:11:48.022353   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:11:50.529587   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:11:55.532141   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
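
The five-second gap between each "Checking apiserver healthz" line and its "stopped" line above is a client-side request timeout, not a server response. A minimal Go sketch of that probe (not minikube's actual implementation; the address is copied from the log, the 5 s deadline is inferred from the timestamps, and TLS verification is skipped only because this standalone sketch has no cluster CA to load):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		// Assumption: deadline inferred from the 5 s gap in the timestamps above.
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The apiserver cert is signed by the cluster CA, which this
			// sketch does not load, so verification is skipped here.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://10.0.2.15:8443/healthz")
	if err != nil {
		// On timeout, Go's net/http reports the same error class as the log:
		// "context deadline exceeded (Client.Timeout exceeded while awaiting headers)".
		fmt.Println("stopped:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("healthz:", resp.Status)
}
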
	I0314 11:11:55.532232   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:11:55.543281   13130 logs.go:276] 2 containers: [bf97ed5e8eab ecd9237272db]
	I0314 11:11:55.543365   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:11:55.554402   13130 logs.go:276] 2 containers: [bf3b24ba54ce ffda7ab9d6f7]
	I0314 11:11:55.554470   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:11:55.566889   13130 logs.go:276] 1 containers: [34be9399e0e3]
	I0314 11:11:55.566982   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:11:55.577993   13130 logs.go:276] 2 containers: [47b861e30431 32036b13d627]
	I0314 11:11:55.578068   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:11:55.589329   13130 logs.go:276] 1 containers: [af01c9fe94bd]
	I0314 11:11:55.589404   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:11:55.600242   13130 logs.go:276] 2 containers: [b648808b453b 71a0c47a9be2]
	I0314 11:11:55.600313   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:11:55.610596   13130 logs.go:276] 0 containers: []
	W0314 11:11:55.610610   13130 logs.go:278] No container was found matching "kindnet"
	I0314 11:11:55.610676   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:11:55.621863   13130 logs.go:276] 2 containers: [819dad21cc40 5b39a3bc91a7]
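
Each logs.go:276 line above is the result of one docker ps query per control-plane component. A sketch of that lookup under stated assumptions (containerIDs is a hypothetical helper, not a minikube API, and docker must be on PATH); two IDs per component indicate an exited container plus its restarted replacement, and an empty result is what produces the "No container was found matching" warning:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs is a hypothetical helper, not a minikube API. It mirrors:
//   docker ps -a --filter=name=k8s_<component> --format={{.ID}}
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	// One container ID per output line; Fields drops the trailing newline.
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kindnet"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "lookup failed:", err)
			continue
		}
		if len(ids) == 0 {
			// Mirrors the warning above for "kindnet".
			fmt.Printf("W no container was found matching %q\n", c)
			continue
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids)
	}
}
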
	I0314 11:11:55.621885   13130 logs.go:123] Gathering logs for etcd [bf3b24ba54ce] ...
	I0314 11:11:55.621891   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf3b24ba54ce"
	I0314 11:11:55.639419   13130 logs.go:123] Gathering logs for coredns [34be9399e0e3] ...
	I0314 11:11:55.639429   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34be9399e0e3"
	I0314 11:11:55.651616   13130 logs.go:123] Gathering logs for storage-provisioner [819dad21cc40] ...
	I0314 11:11:55.651629   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 819dad21cc40"
	I0314 11:11:55.675788   13130 logs.go:123] Gathering logs for Docker ...
	I0314 11:11:55.675800   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:11:55.703081   13130 logs.go:123] Gathering logs for kube-controller-manager [71a0c47a9be2] ...
	I0314 11:11:55.703101   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71a0c47a9be2"
	I0314 11:11:55.715473   13130 logs.go:123] Gathering logs for container status ...
	I0314 11:11:55.715485   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:11:55.727873   13130 logs.go:123] Gathering logs for kubelet ...
	I0314 11:11:55.727887   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:11:55.769138   13130 logs.go:123] Gathering logs for etcd [ffda7ab9d6f7] ...
	I0314 11:11:55.769151   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffda7ab9d6f7"
	I0314 11:11:55.788863   13130 logs.go:123] Gathering logs for kube-scheduler [32036b13d627] ...
	I0314 11:11:55.788878   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32036b13d627"
	I0314 11:11:55.804558   13130 logs.go:123] Gathering logs for kube-proxy [af01c9fe94bd] ...
	I0314 11:11:55.804567   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af01c9fe94bd"
	I0314 11:11:55.817931   13130 logs.go:123] Gathering logs for kube-apiserver [bf97ed5e8eab] ...
	I0314 11:11:55.817941   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf97ed5e8eab"
	I0314 11:11:55.832922   13130 logs.go:123] Gathering logs for kube-apiserver [ecd9237272db] ...
	I0314 11:11:55.832935   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecd9237272db"
	I0314 11:11:55.847786   13130 logs.go:123] Gathering logs for storage-provisioner [5b39a3bc91a7] ...
	I0314 11:11:55.847798   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b39a3bc91a7"
	I0314 11:11:55.860078   13130 logs.go:123] Gathering logs for dmesg ...
	I0314 11:11:55.860090   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:11:55.864672   13130 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:11:55.864679   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:11:55.902069   13130 logs.go:123] Gathering logs for kube-scheduler [47b861e30431] ...
	I0314 11:11:55.902085   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b861e30431"
	I0314 11:11:55.916440   13130 logs.go:123] Gathering logs for kube-controller-manager [b648808b453b] ...
	I0314 11:11:55.916452   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b648808b453b"
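
Every "Gathering logs for <component>" pair above tails the matching container with docker logs --tail 400, executed on the guest through ssh_runner. A local stand-in, assuming docker is available where it runs:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: tail-logs <container-id>")
		os.Exit(1)
	}
	// docker logs interleaves the container's stdout and stderr;
	// both streams are forwarded here, as in the log lines above.
	cmd := exec.Command("docker", "logs", "--tail", "400", os.Args[1])
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "docker logs failed:", err)
		os.Exit(1)
	}
}
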
	I0314 11:11:58.436234   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:12:03.438415   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:12:03.438532   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:12:03.450466   13130 logs.go:276] 2 containers: [bf97ed5e8eab ecd9237272db]
	I0314 11:12:03.450550   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:12:03.462672   13130 logs.go:276] 2 containers: [bf3b24ba54ce ffda7ab9d6f7]
	I0314 11:12:03.462745   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:12:03.473799   13130 logs.go:276] 1 containers: [34be9399e0e3]
	I0314 11:12:03.473868   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:12:03.484256   13130 logs.go:276] 2 containers: [47b861e30431 32036b13d627]
	I0314 11:12:03.484349   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:12:03.495850   13130 logs.go:276] 1 containers: [af01c9fe94bd]
	I0314 11:12:03.495919   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:12:03.509980   13130 logs.go:276] 2 containers: [b648808b453b 71a0c47a9be2]
	I0314 11:12:03.510053   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:12:03.520301   13130 logs.go:276] 0 containers: []
	W0314 11:12:03.520313   13130 logs.go:278] No container was found matching "kindnet"
	I0314 11:12:03.520375   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:12:03.531702   13130 logs.go:276] 2 containers: [819dad21cc40 5b39a3bc91a7]
	I0314 11:12:03.531722   13130 logs.go:123] Gathering logs for kube-apiserver [ecd9237272db] ...
	I0314 11:12:03.531754   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecd9237272db"
	I0314 11:12:03.544361   13130 logs.go:123] Gathering logs for kube-scheduler [47b861e30431] ...
	I0314 11:12:03.544376   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b861e30431"
	I0314 11:12:03.556052   13130 logs.go:123] Gathering logs for coredns [34be9399e0e3] ...
	I0314 11:12:03.556063   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34be9399e0e3"
	I0314 11:12:03.567526   13130 logs.go:123] Gathering logs for kube-scheduler [32036b13d627] ...
	I0314 11:12:03.567537   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32036b13d627"
	I0314 11:12:03.583093   13130 logs.go:123] Gathering logs for dmesg ...
	I0314 11:12:03.583107   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:12:03.587880   13130 logs.go:123] Gathering logs for kube-apiserver [bf97ed5e8eab] ...
	I0314 11:12:03.587887   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf97ed5e8eab"
	I0314 11:12:03.602879   13130 logs.go:123] Gathering logs for etcd [bf3b24ba54ce] ...
	I0314 11:12:03.602892   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf3b24ba54ce"
	I0314 11:12:03.617303   13130 logs.go:123] Gathering logs for Docker ...
	I0314 11:12:03.617314   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:12:03.677522   13130 logs.go:123] Gathering logs for kubelet ...
	I0314 11:12:03.677538   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:12:03.716034   13130 logs.go:123] Gathering logs for etcd [ffda7ab9d6f7] ...
	I0314 11:12:03.716045   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffda7ab9d6f7"
	I0314 11:12:03.736601   13130 logs.go:123] Gathering logs for kube-controller-manager [b648808b453b] ...
	I0314 11:12:03.736613   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b648808b453b"
	I0314 11:12:03.757536   13130 logs.go:123] Gathering logs for storage-provisioner [819dad21cc40] ...
	I0314 11:12:03.757547   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 819dad21cc40"
	I0314 11:12:03.772723   13130 logs.go:123] Gathering logs for storage-provisioner [5b39a3bc91a7] ...
	I0314 11:12:03.772734   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b39a3bc91a7"
	I0314 11:12:03.788154   13130 logs.go:123] Gathering logs for container status ...
	I0314 11:12:03.788165   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:12:03.801038   13130 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:12:03.801051   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:12:03.838251   13130 logs.go:123] Gathering logs for kube-proxy [af01c9fe94bd] ...
	I0314 11:12:03.838263   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af01c9fe94bd"
	I0314 11:12:03.850711   13130 logs.go:123] Gathering logs for kube-controller-manager [71a0c47a9be2] ...
	I0314 11:12:03.850720   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71a0c47a9be2"
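
The kubelet and Docker passes above read systemd's journal rather than container logs: one -u flag per unit (the Docker pass names both docker and cri-docker, so their entries come back interleaved in one stream), last 400 entries. A sketch under those assumptions (journalTail is a hypothetical helper; journalctl must be present):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// journalTail is a hypothetical helper mirroring e.g.
//   journalctl -u docker -u cri-docker -n 400
func journalTail(units ...string) error {
	var args []string
	for _, u := range units {
		args = append(args, "-u", u) // one -u per unit, as in the log
	}
	args = append(args, "-n", "400") // last 400 entries, matching the log
	cmd := exec.Command("journalctl", args...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	for _, units := range [][]string{{"kubelet"}, {"docker", "cri-docker"}} {
		if err := journalTail(units...); err != nil {
			fmt.Fprintln(os.Stderr, "journalctl failed:", err)
		}
	}
}
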
	I0314 11:12:06.364606   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:12:11.366835   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:12:11.367228   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:12:11.398865   13130 logs.go:276] 2 containers: [bf97ed5e8eab ecd9237272db]
	I0314 11:12:11.399014   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:12:11.417357   13130 logs.go:276] 2 containers: [bf3b24ba54ce ffda7ab9d6f7]
	I0314 11:12:11.417443   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:12:11.430615   13130 logs.go:276] 1 containers: [34be9399e0e3]
	I0314 11:12:11.430694   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:12:11.446085   13130 logs.go:276] 2 containers: [47b861e30431 32036b13d627]
	I0314 11:12:11.446164   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:12:11.457062   13130 logs.go:276] 1 containers: [af01c9fe94bd]
	I0314 11:12:11.457125   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:12:11.467812   13130 logs.go:276] 2 containers: [b648808b453b 71a0c47a9be2]
	I0314 11:12:11.467882   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:12:11.477546   13130 logs.go:276] 0 containers: []
	W0314 11:12:11.477562   13130 logs.go:278] No container was found matching "kindnet"
	I0314 11:12:11.477623   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:12:11.488091   13130 logs.go:276] 2 containers: [819dad21cc40 5b39a3bc91a7]
	I0314 11:12:11.488110   13130 logs.go:123] Gathering logs for kube-controller-manager [71a0c47a9be2] ...
	I0314 11:12:11.488116   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71a0c47a9be2"
	I0314 11:12:11.499499   13130 logs.go:123] Gathering logs for storage-provisioner [5b39a3bc91a7] ...
	I0314 11:12:11.499510   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b39a3bc91a7"
	I0314 11:12:11.518170   13130 logs.go:123] Gathering logs for container status ...
	I0314 11:12:11.518181   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:12:11.530172   13130 logs.go:123] Gathering logs for kubelet ...
	I0314 11:12:11.530181   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:12:11.566956   13130 logs.go:123] Gathering logs for coredns [34be9399e0e3] ...
	I0314 11:12:11.566963   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34be9399e0e3"
	I0314 11:12:11.584476   13130 logs.go:123] Gathering logs for storage-provisioner [819dad21cc40] ...
	I0314 11:12:11.584487   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 819dad21cc40"
	I0314 11:12:11.596018   13130 logs.go:123] Gathering logs for dmesg ...
	I0314 11:12:11.596028   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:12:11.600256   13130 logs.go:123] Gathering logs for etcd [bf3b24ba54ce] ...
	I0314 11:12:11.600263   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf3b24ba54ce"
	I0314 11:12:11.613741   13130 logs.go:123] Gathering logs for etcd [ffda7ab9d6f7] ...
	I0314 11:12:11.613751   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffda7ab9d6f7"
	I0314 11:12:11.627967   13130 logs.go:123] Gathering logs for kube-scheduler [47b861e30431] ...
	I0314 11:12:11.627976   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b861e30431"
	I0314 11:12:11.639822   13130 logs.go:123] Gathering logs for kube-scheduler [32036b13d627] ...
	I0314 11:12:11.639839   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32036b13d627"
	I0314 11:12:11.654516   13130 logs.go:123] Gathering logs for kube-proxy [af01c9fe94bd] ...
	I0314 11:12:11.654527   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af01c9fe94bd"
	I0314 11:12:11.666401   13130 logs.go:123] Gathering logs for kube-controller-manager [b648808b453b] ...
	I0314 11:12:11.666413   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b648808b453b"
	I0314 11:12:11.684118   13130 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:12:11.684128   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:12:11.721332   13130 logs.go:123] Gathering logs for kube-apiserver [ecd9237272db] ...
	I0314 11:12:11.721342   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecd9237272db"
	I0314 11:12:11.733678   13130 logs.go:123] Gathering logs for Docker ...
	I0314 11:12:11.733688   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:12:11.760175   13130 logs.go:123] Gathering logs for kube-apiserver [bf97ed5e8eab] ...
	I0314 11:12:11.760186   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf97ed5e8eab"
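
The dmesg pass keeps only kernel messages at warning severity and above and trims the result to the last 400 lines; because a shell pipeline is involved, the whole line is handed to bash, exactly as the ssh_runner entries show. A sketch with the flags copied verbatim from the log (not glossed individually):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Flags copied verbatim from the log; the pipe requires a shell.
	cmd := exec.Command("/bin/bash", "-c",
		"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "dmesg gathering failed:", err)
	}
}
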
	I0314 11:12:14.277185   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:12:19.279780   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:12:19.280131   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:12:19.318227   13130 logs.go:276] 2 containers: [bf97ed5e8eab ecd9237272db]
	I0314 11:12:19.318383   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:12:19.344546   13130 logs.go:276] 2 containers: [bf3b24ba54ce ffda7ab9d6f7]
	I0314 11:12:19.344681   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:12:19.359615   13130 logs.go:276] 1 containers: [34be9399e0e3]
	I0314 11:12:19.359693   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:12:19.378005   13130 logs.go:276] 2 containers: [47b861e30431 32036b13d627]
	I0314 11:12:19.378077   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:12:19.388835   13130 logs.go:276] 1 containers: [af01c9fe94bd]
	I0314 11:12:19.388903   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:12:19.399447   13130 logs.go:276] 2 containers: [b648808b453b 71a0c47a9be2]
	I0314 11:12:19.399520   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:12:19.409815   13130 logs.go:276] 0 containers: []
	W0314 11:12:19.409830   13130 logs.go:278] No container was found matching "kindnet"
	I0314 11:12:19.409888   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:12:19.420189   13130 logs.go:276] 2 containers: [819dad21cc40 5b39a3bc91a7]
	I0314 11:12:19.420213   13130 logs.go:123] Gathering logs for storage-provisioner [819dad21cc40] ...
	I0314 11:12:19.420218   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 819dad21cc40"
	I0314 11:12:19.431247   13130 logs.go:123] Gathering logs for Docker ...
	I0314 11:12:19.431259   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:12:19.454472   13130 logs.go:123] Gathering logs for dmesg ...
	I0314 11:12:19.454479   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:12:19.458675   13130 logs.go:123] Gathering logs for kube-apiserver [ecd9237272db] ...
	I0314 11:12:19.458681   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecd9237272db"
	I0314 11:12:19.471004   13130 logs.go:123] Gathering logs for etcd [bf3b24ba54ce] ...
	I0314 11:12:19.471019   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf3b24ba54ce"
	I0314 11:12:19.485533   13130 logs.go:123] Gathering logs for etcd [ffda7ab9d6f7] ...
	I0314 11:12:19.485543   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffda7ab9d6f7"
	I0314 11:12:19.503648   13130 logs.go:123] Gathering logs for container status ...
	I0314 11:12:19.503659   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:12:19.515264   13130 logs.go:123] Gathering logs for kubelet ...
	I0314 11:12:19.515277   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:12:19.551312   13130 logs.go:123] Gathering logs for coredns [34be9399e0e3] ...
	I0314 11:12:19.551322   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34be9399e0e3"
	I0314 11:12:19.567636   13130 logs.go:123] Gathering logs for kube-scheduler [32036b13d627] ...
	I0314 11:12:19.567647   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32036b13d627"
	I0314 11:12:19.582439   13130 logs.go:123] Gathering logs for kube-controller-manager [b648808b453b] ...
	I0314 11:12:19.582450   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b648808b453b"
	I0314 11:12:19.600729   13130 logs.go:123] Gathering logs for kube-proxy [af01c9fe94bd] ...
	I0314 11:12:19.600740   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af01c9fe94bd"
	I0314 11:12:19.611852   13130 logs.go:123] Gathering logs for kube-controller-manager [71a0c47a9be2] ...
	I0314 11:12:19.611862   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71a0c47a9be2"
	I0314 11:12:19.623307   13130 logs.go:123] Gathering logs for storage-provisioner [5b39a3bc91a7] ...
	I0314 11:12:19.623316   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b39a3bc91a7"
	I0314 11:12:19.634469   13130 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:12:19.634479   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:12:19.672203   13130 logs.go:123] Gathering logs for kube-apiserver [bf97ed5e8eab] ...
	I0314 11:12:19.672213   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf97ed5e8eab"
	I0314 11:12:19.686125   13130 logs.go:123] Gathering logs for kube-scheduler [47b861e30431] ...
	I0314 11:12:19.686135   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b861e30431"
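
The "describe nodes" pass does not rely on a host kubectl: it runs the version-pinned binary minikube placed inside the VM against the in-VM kubeconfig, so the dump works even with no kubectl installed on the host. A sketch with both paths copied from the log (it only makes sense executed inside the guest):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Both paths are copied verbatim from the log lines above.
	cmd := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.24.1/kubectl",
		"describe", "nodes",
		"--kubeconfig=/var/lib/minikube/kubeconfig")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "describe nodes failed:", err)
	}
}
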
	I0314 11:12:22.203863   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:12:27.205285   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:12:27.205419   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:12:27.217645   13130 logs.go:276] 2 containers: [bf97ed5e8eab ecd9237272db]
	I0314 11:12:27.217736   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:12:27.229847   13130 logs.go:276] 2 containers: [bf3b24ba54ce ffda7ab9d6f7]
	I0314 11:12:27.229925   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:12:27.241894   13130 logs.go:276] 1 containers: [34be9399e0e3]
	I0314 11:12:27.241964   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:12:27.254932   13130 logs.go:276] 2 containers: [47b861e30431 32036b13d627]
	I0314 11:12:27.255015   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:12:27.267485   13130 logs.go:276] 1 containers: [af01c9fe94bd]
	I0314 11:12:27.267563   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:12:27.280088   13130 logs.go:276] 2 containers: [b648808b453b 71a0c47a9be2]
	I0314 11:12:27.280163   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:12:27.291922   13130 logs.go:276] 0 containers: []
	W0314 11:12:27.291935   13130 logs.go:278] No container was found matching "kindnet"
	I0314 11:12:27.292001   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:12:27.304519   13130 logs.go:276] 2 containers: [819dad21cc40 5b39a3bc91a7]
	I0314 11:12:27.304537   13130 logs.go:123] Gathering logs for kube-apiserver [ecd9237272db] ...
	I0314 11:12:27.304544   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecd9237272db"
	I0314 11:12:27.323403   13130 logs.go:123] Gathering logs for container status ...
	I0314 11:12:27.323416   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:12:27.337385   13130 logs.go:123] Gathering logs for kube-apiserver [bf97ed5e8eab] ...
	I0314 11:12:27.337398   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf97ed5e8eab"
	I0314 11:12:27.353840   13130 logs.go:123] Gathering logs for etcd [bf3b24ba54ce] ...
	I0314 11:12:27.353856   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf3b24ba54ce"
	I0314 11:12:27.371995   13130 logs.go:123] Gathering logs for coredns [34be9399e0e3] ...
	I0314 11:12:27.372008   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34be9399e0e3"
	I0314 11:12:27.385489   13130 logs.go:123] Gathering logs for kube-scheduler [32036b13d627] ...
	I0314 11:12:27.385504   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32036b13d627"
	I0314 11:12:27.402795   13130 logs.go:123] Gathering logs for kube-scheduler [47b861e30431] ...
	I0314 11:12:27.402811   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b861e30431"
	I0314 11:12:27.417868   13130 logs.go:123] Gathering logs for kube-controller-manager [b648808b453b] ...
	I0314 11:12:27.417882   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b648808b453b"
	I0314 11:12:27.438275   13130 logs.go:123] Gathering logs for storage-provisioner [819dad21cc40] ...
	I0314 11:12:27.438296   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 819dad21cc40"
	I0314 11:12:27.457136   13130 logs.go:123] Gathering logs for storage-provisioner [5b39a3bc91a7] ...
	I0314 11:12:27.457148   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b39a3bc91a7"
	I0314 11:12:27.470568   13130 logs.go:123] Gathering logs for Docker ...
	I0314 11:12:27.470582   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:12:27.496823   13130 logs.go:123] Gathering logs for kubelet ...
	I0314 11:12:27.496845   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:12:27.537136   13130 logs.go:123] Gathering logs for dmesg ...
	I0314 11:12:27.537153   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:12:27.542559   13130 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:12:27.542569   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:12:27.581674   13130 logs.go:123] Gathering logs for etcd [ffda7ab9d6f7] ...
	I0314 11:12:27.581688   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffda7ab9d6f7"
	I0314 11:12:27.596537   13130 logs.go:123] Gathering logs for kube-proxy [af01c9fe94bd] ...
	I0314 11:12:27.596551   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af01c9fe94bd"
	I0314 11:12:27.609559   13130 logs.go:123] Gathering logs for kube-controller-manager [71a0c47a9be2] ...
	I0314 11:12:27.609571   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71a0c47a9be2"
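
The "container status" line is the one command above with built-in fallback: the backtick substitution resolves crictl's full path when it is installed (otherwise it leaves the bare name, whose invocation fails), and || then falls back to plain docker ps -a. A sketch that preserves that logic by delegating the whole line to bash:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Command copied from the log; backticks and || are shell syntax,
	// so the line must go through bash rather than exec directly.
	cmd := exec.Command("/bin/bash", "-c",
		"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "container status failed:", err)
	}
}
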
	I0314 11:12:30.124187   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:12:35.126452   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:12:35.128075   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:12:35.167650   13130 logs.go:276] 2 containers: [bf97ed5e8eab ecd9237272db]
	I0314 11:12:35.167788   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:12:35.188971   13130 logs.go:276] 2 containers: [bf3b24ba54ce ffda7ab9d6f7]
	I0314 11:12:35.189116   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:12:35.204182   13130 logs.go:276] 1 containers: [34be9399e0e3]
	I0314 11:12:35.204253   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:12:35.216867   13130 logs.go:276] 2 containers: [47b861e30431 32036b13d627]
	I0314 11:12:35.216945   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:12:35.227664   13130 logs.go:276] 1 containers: [af01c9fe94bd]
	I0314 11:12:35.227729   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:12:35.238502   13130 logs.go:276] 2 containers: [b648808b453b 71a0c47a9be2]
	I0314 11:12:35.238561   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:12:35.249028   13130 logs.go:276] 0 containers: []
	W0314 11:12:35.249043   13130 logs.go:278] No container was found matching "kindnet"
	I0314 11:12:35.249102   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:12:35.267727   13130 logs.go:276] 2 containers: [819dad21cc40 5b39a3bc91a7]
	I0314 11:12:35.267747   13130 logs.go:123] Gathering logs for dmesg ...
	I0314 11:12:35.267753   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:12:35.272161   13130 logs.go:123] Gathering logs for kube-scheduler [47b861e30431] ...
	I0314 11:12:35.272169   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b861e30431"
	I0314 11:12:35.284055   13130 logs.go:123] Gathering logs for kube-scheduler [32036b13d627] ...
	I0314 11:12:35.284065   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32036b13d627"
	I0314 11:12:35.298625   13130 logs.go:123] Gathering logs for Docker ...
	I0314 11:12:35.298637   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:12:35.322155   13130 logs.go:123] Gathering logs for kube-apiserver [bf97ed5e8eab] ...
	I0314 11:12:35.322162   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf97ed5e8eab"
	I0314 11:12:35.336059   13130 logs.go:123] Gathering logs for kube-controller-manager [b648808b453b] ...
	I0314 11:12:35.336071   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b648808b453b"
	I0314 11:12:35.353563   13130 logs.go:123] Gathering logs for kubelet ...
	I0314 11:12:35.353574   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:12:35.390919   13130 logs.go:123] Gathering logs for etcd [ffda7ab9d6f7] ...
	I0314 11:12:35.390930   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffda7ab9d6f7"
	I0314 11:12:35.406086   13130 logs.go:123] Gathering logs for kube-proxy [af01c9fe94bd] ...
	I0314 11:12:35.406097   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af01c9fe94bd"
	I0314 11:12:35.421136   13130 logs.go:123] Gathering logs for storage-provisioner [5b39a3bc91a7] ...
	I0314 11:12:35.421147   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b39a3bc91a7"
	I0314 11:12:35.432715   13130 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:12:35.432727   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:12:35.469560   13130 logs.go:123] Gathering logs for kube-apiserver [ecd9237272db] ...
	I0314 11:12:35.469571   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecd9237272db"
	I0314 11:12:35.487249   13130 logs.go:123] Gathering logs for etcd [bf3b24ba54ce] ...
	I0314 11:12:35.487263   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf3b24ba54ce"
	I0314 11:12:35.501698   13130 logs.go:123] Gathering logs for coredns [34be9399e0e3] ...
	I0314 11:12:35.501708   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34be9399e0e3"
	I0314 11:12:35.513300   13130 logs.go:123] Gathering logs for kube-controller-manager [71a0c47a9be2] ...
	I0314 11:12:35.513313   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71a0c47a9be2"
	I0314 11:12:35.525391   13130 logs.go:123] Gathering logs for storage-provisioner [819dad21cc40] ...
	I0314 11:12:35.525402   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 819dad21cc40"
	I0314 11:12:35.537081   13130 logs.go:123] Gathering logs for container status ...
	I0314 11:12:35.537092   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:12:38.051474   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:12:43.053700   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:12:43.054115   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:12:43.093663   13130 logs.go:276] 2 containers: [bf97ed5e8eab ecd9237272db]
	I0314 11:12:43.093782   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:12:43.122408   13130 logs.go:276] 2 containers: [bf3b24ba54ce ffda7ab9d6f7]
	I0314 11:12:43.122507   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:12:43.135841   13130 logs.go:276] 1 containers: [34be9399e0e3]
	I0314 11:12:43.135915   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:12:43.147248   13130 logs.go:276] 2 containers: [47b861e30431 32036b13d627]
	I0314 11:12:43.147319   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:12:43.161561   13130 logs.go:276] 1 containers: [af01c9fe94bd]
	I0314 11:12:43.161635   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:12:43.185744   13130 logs.go:276] 2 containers: [b648808b453b 71a0c47a9be2]
	I0314 11:12:43.185820   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:12:43.199751   13130 logs.go:276] 0 containers: []
	W0314 11:12:43.199763   13130 logs.go:278] No container was found matching "kindnet"
	I0314 11:12:43.199826   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:12:43.210580   13130 logs.go:276] 2 containers: [819dad21cc40 5b39a3bc91a7]
	I0314 11:12:43.210599   13130 logs.go:123] Gathering logs for dmesg ...
	I0314 11:12:43.210605   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:12:43.215136   13130 logs.go:123] Gathering logs for coredns [34be9399e0e3] ...
	I0314 11:12:43.215146   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34be9399e0e3"
	I0314 11:12:43.226473   13130 logs.go:123] Gathering logs for kube-controller-manager [b648808b453b] ...
	I0314 11:12:43.226484   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b648808b453b"
	I0314 11:12:43.248364   13130 logs.go:123] Gathering logs for storage-provisioner [5b39a3bc91a7] ...
	I0314 11:12:43.248374   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b39a3bc91a7"
	I0314 11:12:43.262821   13130 logs.go:123] Gathering logs for kube-scheduler [47b861e30431] ...
	I0314 11:12:43.262832   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b861e30431"
	I0314 11:12:43.274587   13130 logs.go:123] Gathering logs for container status ...
	I0314 11:12:43.274600   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:12:43.286194   13130 logs.go:123] Gathering logs for kube-apiserver [ecd9237272db] ...
	I0314 11:12:43.286208   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecd9237272db"
	I0314 11:12:43.299701   13130 logs.go:123] Gathering logs for kube-proxy [af01c9fe94bd] ...
	I0314 11:12:43.299715   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af01c9fe94bd"
	I0314 11:12:43.311820   13130 logs.go:123] Gathering logs for kube-controller-manager [71a0c47a9be2] ...
	I0314 11:12:43.311831   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71a0c47a9be2"
	I0314 11:12:43.323552   13130 logs.go:123] Gathering logs for storage-provisioner [819dad21cc40] ...
	I0314 11:12:43.323563   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 819dad21cc40"
	I0314 11:12:43.334864   13130 logs.go:123] Gathering logs for kube-scheduler [32036b13d627] ...
	I0314 11:12:43.334875   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32036b13d627"
	I0314 11:12:43.350221   13130 logs.go:123] Gathering logs for Docker ...
	I0314 11:12:43.350234   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:12:43.374630   13130 logs.go:123] Gathering logs for kubelet ...
	I0314 11:12:43.374641   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:12:43.412821   13130 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:12:43.412830   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:12:43.449736   13130 logs.go:123] Gathering logs for kube-apiserver [bf97ed5e8eab] ...
	I0314 11:12:43.449747   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf97ed5e8eab"
	I0314 11:12:43.463779   13130 logs.go:123] Gathering logs for etcd [bf3b24ba54ce] ...
	I0314 11:12:43.463788   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf3b24ba54ce"
	I0314 11:12:43.477766   13130 logs.go:123] Gathering logs for etcd [ffda7ab9d6f7] ...
	I0314 11:12:43.477775   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffda7ab9d6f7"
	I0314 11:12:45.994563   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:12:50.996748   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:12:50.997132   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:12:51.026948   13130 logs.go:276] 2 containers: [bf97ed5e8eab ecd9237272db]
	I0314 11:12:51.027088   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:12:51.045263   13130 logs.go:276] 2 containers: [bf3b24ba54ce ffda7ab9d6f7]
	I0314 11:12:51.045356   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:12:51.059391   13130 logs.go:276] 1 containers: [34be9399e0e3]
	I0314 11:12:51.059479   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:12:51.071537   13130 logs.go:276] 2 containers: [47b861e30431 32036b13d627]
	I0314 11:12:51.071611   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:12:51.082245   13130 logs.go:276] 1 containers: [af01c9fe94bd]
	I0314 11:12:51.082317   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:12:51.095838   13130 logs.go:276] 2 containers: [b648808b453b 71a0c47a9be2]
	I0314 11:12:51.095919   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:12:51.107203   13130 logs.go:276] 0 containers: []
	W0314 11:12:51.107215   13130 logs.go:278] No container was found matching "kindnet"
	I0314 11:12:51.107277   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:12:51.118039   13130 logs.go:276] 2 containers: [819dad21cc40 5b39a3bc91a7]
	I0314 11:12:51.118060   13130 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:12:51.118066   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:12:51.152944   13130 logs.go:123] Gathering logs for kube-apiserver [bf97ed5e8eab] ...
	I0314 11:12:51.152955   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf97ed5e8eab"
	I0314 11:12:51.168595   13130 logs.go:123] Gathering logs for kube-scheduler [32036b13d627] ...
	I0314 11:12:51.168609   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32036b13d627"
	I0314 11:12:51.188443   13130 logs.go:123] Gathering logs for dmesg ...
	I0314 11:12:51.188453   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:12:51.192954   13130 logs.go:123] Gathering logs for kube-scheduler [47b861e30431] ...
	I0314 11:12:51.192960   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b861e30431"
	I0314 11:12:51.204562   13130 logs.go:123] Gathering logs for kube-proxy [af01c9fe94bd] ...
	I0314 11:12:51.204575   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af01c9fe94bd"
	I0314 11:12:51.216559   13130 logs.go:123] Gathering logs for storage-provisioner [5b39a3bc91a7] ...
	I0314 11:12:51.216570   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b39a3bc91a7"
	I0314 11:12:51.228257   13130 logs.go:123] Gathering logs for Docker ...
	I0314 11:12:51.228269   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:12:51.251437   13130 logs.go:123] Gathering logs for kube-apiserver [ecd9237272db] ...
	I0314 11:12:51.251445   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecd9237272db"
	I0314 11:12:51.267238   13130 logs.go:123] Gathering logs for coredns [34be9399e0e3] ...
	I0314 11:12:51.267252   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34be9399e0e3"
	I0314 11:12:51.281071   13130 logs.go:123] Gathering logs for kube-controller-manager [71a0c47a9be2] ...
	I0314 11:12:51.281083   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71a0c47a9be2"
	I0314 11:12:51.292144   13130 logs.go:123] Gathering logs for storage-provisioner [819dad21cc40] ...
	I0314 11:12:51.292154   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 819dad21cc40"
	I0314 11:12:51.310246   13130 logs.go:123] Gathering logs for container status ...
	I0314 11:12:51.310257   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:12:51.322105   13130 logs.go:123] Gathering logs for etcd [ffda7ab9d6f7] ...
	I0314 11:12:51.322116   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffda7ab9d6f7"
	I0314 11:12:51.336372   13130 logs.go:123] Gathering logs for etcd [bf3b24ba54ce] ...
	I0314 11:12:51.336385   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf3b24ba54ce"
	I0314 11:12:51.350040   13130 logs.go:123] Gathering logs for kube-controller-manager [b648808b453b] ...
	I0314 11:12:51.350051   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b648808b453b"
	I0314 11:12:51.369836   13130 logs.go:123] Gathering logs for kubelet ...
	I0314 11:12:51.369845   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:12:53.910470   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:12:58.913023   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:12:58.913387   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:12:58.949859   13130 logs.go:276] 2 containers: [bf97ed5e8eab ecd9237272db]
	I0314 11:12:58.949995   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:12:58.970453   13130 logs.go:276] 2 containers: [bf3b24ba54ce ffda7ab9d6f7]
	I0314 11:12:58.970545   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:12:58.984860   13130 logs.go:276] 1 containers: [34be9399e0e3]
	I0314 11:12:58.984924   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:12:58.997743   13130 logs.go:276] 2 containers: [47b861e30431 32036b13d627]
	I0314 11:12:58.997831   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:12:59.008388   13130 logs.go:276] 1 containers: [af01c9fe94bd]
	I0314 11:12:59.008457   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:12:59.018845   13130 logs.go:276] 2 containers: [b648808b453b 71a0c47a9be2]
	I0314 11:12:59.018901   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:12:59.028920   13130 logs.go:276] 0 containers: []
	W0314 11:12:59.028934   13130 logs.go:278] No container was found matching "kindnet"
	I0314 11:12:59.028994   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:12:59.039464   13130 logs.go:276] 2 containers: [819dad21cc40 5b39a3bc91a7]
	I0314 11:12:59.039484   13130 logs.go:123] Gathering logs for kubelet ...
	I0314 11:12:59.039489   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:12:59.076646   13130 logs.go:123] Gathering logs for kube-apiserver [ecd9237272db] ...
	I0314 11:12:59.076655   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecd9237272db"
	I0314 11:12:59.089299   13130 logs.go:123] Gathering logs for coredns [34be9399e0e3] ...
	I0314 11:12:59.089309   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34be9399e0e3"
	I0314 11:12:59.108031   13130 logs.go:123] Gathering logs for kube-scheduler [47b861e30431] ...
	I0314 11:12:59.108042   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b861e30431"
	I0314 11:12:59.123819   13130 logs.go:123] Gathering logs for container status ...
	I0314 11:12:59.123831   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:12:59.136014   13130 logs.go:123] Gathering logs for etcd [ffda7ab9d6f7] ...
	I0314 11:12:59.136025   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffda7ab9d6f7"
	I0314 11:12:59.150794   13130 logs.go:123] Gathering logs for kube-scheduler [32036b13d627] ...
	I0314 11:12:59.150807   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32036b13d627"
	I0314 11:12:59.169856   13130 logs.go:123] Gathering logs for kube-proxy [af01c9fe94bd] ...
	I0314 11:12:59.169869   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af01c9fe94bd"
	I0314 11:12:59.182154   13130 logs.go:123] Gathering logs for Docker ...
	I0314 11:12:59.182166   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:12:59.205786   13130 logs.go:123] Gathering logs for dmesg ...
	I0314 11:12:59.205793   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:12:59.210595   13130 logs.go:123] Gathering logs for etcd [bf3b24ba54ce] ...
	I0314 11:12:59.210602   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf3b24ba54ce"
	I0314 11:12:59.226624   13130 logs.go:123] Gathering logs for kube-controller-manager [b648808b453b] ...
	I0314 11:12:59.226634   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b648808b453b"
	I0314 11:12:59.243725   13130 logs.go:123] Gathering logs for kube-controller-manager [71a0c47a9be2] ...
	I0314 11:12:59.243736   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71a0c47a9be2"
	I0314 11:12:59.257961   13130 logs.go:123] Gathering logs for storage-provisioner [819dad21cc40] ...
	I0314 11:12:59.257974   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 819dad21cc40"
	I0314 11:12:59.269308   13130 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:12:59.269321   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:12:59.304148   13130 logs.go:123] Gathering logs for kube-apiserver [bf97ed5e8eab] ...
	I0314 11:12:59.304159   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf97ed5e8eab"
	I0314 11:12:59.317941   13130 logs.go:123] Gathering logs for storage-provisioner [5b39a3bc91a7] ...
	I0314 11:12:59.317951   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b39a3bc91a7"
	I0314 11:13:01.835186   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:13:06.837680   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:13:06.837816   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:13:06.850414   13130 logs.go:276] 2 containers: [bf97ed5e8eab ecd9237272db]
	I0314 11:13:06.850482   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:13:06.864799   13130 logs.go:276] 2 containers: [bf3b24ba54ce ffda7ab9d6f7]
	I0314 11:13:06.864866   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:13:06.879700   13130 logs.go:276] 1 containers: [34be9399e0e3]
	I0314 11:13:06.879766   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:13:06.890382   13130 logs.go:276] 2 containers: [47b861e30431 32036b13d627]
	I0314 11:13:06.890457   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:13:06.900745   13130 logs.go:276] 1 containers: [af01c9fe94bd]
	I0314 11:13:06.900804   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:13:06.911300   13130 logs.go:276] 2 containers: [b648808b453b 71a0c47a9be2]
	I0314 11:13:06.911372   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:13:06.925671   13130 logs.go:276] 0 containers: []
	W0314 11:13:06.925683   13130 logs.go:278] No container was found matching "kindnet"
	I0314 11:13:06.925742   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:13:06.936398   13130 logs.go:276] 2 containers: [819dad21cc40 5b39a3bc91a7]
	I0314 11:13:06.936414   13130 logs.go:123] Gathering logs for dmesg ...
	I0314 11:13:06.936419   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:13:06.941314   13130 logs.go:123] Gathering logs for etcd [bf3b24ba54ce] ...
	I0314 11:13:06.941321   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf3b24ba54ce"
	I0314 11:13:06.955318   13130 logs.go:123] Gathering logs for kube-apiserver [bf97ed5e8eab] ...
	I0314 11:13:06.955327   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf97ed5e8eab"
	I0314 11:13:06.969696   13130 logs.go:123] Gathering logs for coredns [34be9399e0e3] ...
	I0314 11:13:06.969706   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34be9399e0e3"
	I0314 11:13:06.981053   13130 logs.go:123] Gathering logs for kube-scheduler [47b861e30431] ...
	I0314 11:13:06.981063   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b861e30431"
	I0314 11:13:06.993048   13130 logs.go:123] Gathering logs for kube-scheduler [32036b13d627] ...
	I0314 11:13:06.993061   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32036b13d627"
	I0314 11:13:07.012037   13130 logs.go:123] Gathering logs for kube-controller-manager [71a0c47a9be2] ...
	I0314 11:13:07.012049   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71a0c47a9be2"
	I0314 11:13:07.024105   13130 logs.go:123] Gathering logs for Docker ...
	I0314 11:13:07.024115   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:13:07.047983   13130 logs.go:123] Gathering logs for kubelet ...
	I0314 11:13:07.047993   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:13:07.085396   13130 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:13:07.085404   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:13:07.121340   13130 logs.go:123] Gathering logs for storage-provisioner [5b39a3bc91a7] ...
	I0314 11:13:07.121352   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b39a3bc91a7"
	I0314 11:13:07.133200   13130 logs.go:123] Gathering logs for container status ...
	I0314 11:13:07.133213   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:13:07.144465   13130 logs.go:123] Gathering logs for etcd [ffda7ab9d6f7] ...
	I0314 11:13:07.144479   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffda7ab9d6f7"
	I0314 11:13:07.159182   13130 logs.go:123] Gathering logs for kube-proxy [af01c9fe94bd] ...
	I0314 11:13:07.159196   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af01c9fe94bd"
	I0314 11:13:07.171183   13130 logs.go:123] Gathering logs for storage-provisioner [819dad21cc40] ...
	I0314 11:13:07.171193   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 819dad21cc40"
	I0314 11:13:07.183721   13130 logs.go:123] Gathering logs for kube-apiserver [ecd9237272db] ...
	I0314 11:13:07.183732   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecd9237272db"
	I0314 11:13:07.195896   13130 logs.go:123] Gathering logs for kube-controller-manager [b648808b453b] ...
	I0314 11:13:07.195910   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b648808b453b"
	I0314 11:13:09.715119   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:13:14.717615   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:13:14.718008   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:13:14.764132   13130 logs.go:276] 2 containers: [bf97ed5e8eab ecd9237272db]
	I0314 11:13:14.764263   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:13:14.783232   13130 logs.go:276] 2 containers: [bf3b24ba54ce ffda7ab9d6f7]
	I0314 11:13:14.783327   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:13:14.797280   13130 logs.go:276] 1 containers: [34be9399e0e3]
	I0314 11:13:14.797360   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:13:14.809214   13130 logs.go:276] 2 containers: [47b861e30431 32036b13d627]
	I0314 11:13:14.809292   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:13:14.819991   13130 logs.go:276] 1 containers: [af01c9fe94bd]
	I0314 11:13:14.820060   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:13:14.830259   13130 logs.go:276] 2 containers: [b648808b453b 71a0c47a9be2]
	I0314 11:13:14.830329   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:13:14.840323   13130 logs.go:276] 0 containers: []
	W0314 11:13:14.840336   13130 logs.go:278] No container was found matching "kindnet"
	I0314 11:13:14.840393   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:13:14.851158   13130 logs.go:276] 2 containers: [819dad21cc40 5b39a3bc91a7]
	I0314 11:13:14.851187   13130 logs.go:123] Gathering logs for etcd [bf3b24ba54ce] ...
	I0314 11:13:14.851195   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf3b24ba54ce"
	I0314 11:13:14.864936   13130 logs.go:123] Gathering logs for coredns [34be9399e0e3] ...
	I0314 11:13:14.864949   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34be9399e0e3"
	I0314 11:13:14.876507   13130 logs.go:123] Gathering logs for kube-scheduler [32036b13d627] ...
	I0314 11:13:14.876517   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32036b13d627"
	I0314 11:13:14.891372   13130 logs.go:123] Gathering logs for storage-provisioner [819dad21cc40] ...
	I0314 11:13:14.891385   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 819dad21cc40"
	I0314 11:13:14.907355   13130 logs.go:123] Gathering logs for Docker ...
	I0314 11:13:14.907364   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:13:14.930586   13130 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:13:14.930595   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:13:14.964250   13130 logs.go:123] Gathering logs for kube-apiserver [bf97ed5e8eab] ...
	I0314 11:13:14.964264   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf97ed5e8eab"
	I0314 11:13:14.978401   13130 logs.go:123] Gathering logs for etcd [ffda7ab9d6f7] ...
	I0314 11:13:14.978412   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffda7ab9d6f7"
	I0314 11:13:14.992792   13130 logs.go:123] Gathering logs for kube-proxy [af01c9fe94bd] ...
	I0314 11:13:14.992802   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af01c9fe94bd"
	I0314 11:13:15.004531   13130 logs.go:123] Gathering logs for container status ...
	I0314 11:13:15.004543   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:13:15.016847   13130 logs.go:123] Gathering logs for kubelet ...
	I0314 11:13:15.016857   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:13:15.053074   13130 logs.go:123] Gathering logs for dmesg ...
	I0314 11:13:15.053083   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:13:15.057580   13130 logs.go:123] Gathering logs for kube-controller-manager [b648808b453b] ...
	I0314 11:13:15.057585   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b648808b453b"
	I0314 11:13:15.074626   13130 logs.go:123] Gathering logs for kube-controller-manager [71a0c47a9be2] ...
	I0314 11:13:15.074635   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71a0c47a9be2"
	I0314 11:13:15.085957   13130 logs.go:123] Gathering logs for kube-apiserver [ecd9237272db] ...
	I0314 11:13:15.085968   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecd9237272db"
	I0314 11:13:15.099075   13130 logs.go:123] Gathering logs for kube-scheduler [47b861e30431] ...
	I0314 11:13:15.099086   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b861e30431"
	I0314 11:13:15.110708   13130 logs.go:123] Gathering logs for storage-provisioner [5b39a3bc91a7] ...
	I0314 11:13:15.110719   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b39a3bc91a7"
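
The block above is one iteration of a pattern that repeats for the rest of this test: minikube polls the apiserver's /healthz endpoint, and each time the probe times out it re-enumerates the control-plane containers and tails their logs before trying again. Below is a minimal sketch of the polling half in Go. This is not minikube's actual code; the 5-second client timeout and the 10.0.2.15:8443 endpoint come straight from the log, while the retry interval and overall deadline are assumptions.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, interval, overall time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches "Client.Timeout exceeded" after ~5s in the log
		Transport: &http.Transport{
			// The apiserver cert is minikube-signed; a real client would
			// trust minikube's CA rather than skip verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(overall)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz answered: the control plane is up
			}
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("apiserver at %s never became healthy", url)
}

func main() {
	fmt.Println(waitForHealthz("https://10.0.2.15:8443/healthz", 2*time.Second, 4*time.Minute))
}

In this run the probe never succeeds, which is why the same diagnostic cycle recurs every few seconds below.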
	I0314 11:13:17.624318   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:13:22.626735   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:13:22.627121   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:13:22.664185   13130 logs.go:276] 2 containers: [bf97ed5e8eab ecd9237272db]
	I0314 11:13:22.664319   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:13:22.687784   13130 logs.go:276] 2 containers: [bf3b24ba54ce ffda7ab9d6f7]
	I0314 11:13:22.687894   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:13:22.702943   13130 logs.go:276] 1 containers: [34be9399e0e3]
	I0314 11:13:22.703032   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:13:22.716264   13130 logs.go:276] 2 containers: [47b861e30431 32036b13d627]
	I0314 11:13:22.716344   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:13:22.731154   13130 logs.go:276] 1 containers: [af01c9fe94bd]
	I0314 11:13:22.731223   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:13:22.743662   13130 logs.go:276] 2 containers: [b648808b453b 71a0c47a9be2]
	I0314 11:13:22.743736   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:13:22.753969   13130 logs.go:276] 0 containers: []
	W0314 11:13:22.753983   13130 logs.go:278] No container was found matching "kindnet"
	I0314 11:13:22.754042   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:13:22.765173   13130 logs.go:276] 2 containers: [819dad21cc40 5b39a3bc91a7]
	I0314 11:13:22.765191   13130 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:13:22.765201   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:13:22.801680   13130 logs.go:123] Gathering logs for etcd [ffda7ab9d6f7] ...
	I0314 11:13:22.801691   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffda7ab9d6f7"
	I0314 11:13:22.824729   13130 logs.go:123] Gathering logs for kube-controller-manager [b648808b453b] ...
	I0314 11:13:22.824742   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b648808b453b"
	I0314 11:13:22.842655   13130 logs.go:123] Gathering logs for storage-provisioner [5b39a3bc91a7] ...
	I0314 11:13:22.842666   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b39a3bc91a7"
	I0314 11:13:22.854249   13130 logs.go:123] Gathering logs for Docker ...
	I0314 11:13:22.854260   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:13:22.877335   13130 logs.go:123] Gathering logs for container status ...
	I0314 11:13:22.877342   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:13:22.889605   13130 logs.go:123] Gathering logs for kube-apiserver [ecd9237272db] ...
	I0314 11:13:22.889618   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecd9237272db"
	I0314 11:13:22.904135   13130 logs.go:123] Gathering logs for coredns [34be9399e0e3] ...
	I0314 11:13:22.904149   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34be9399e0e3"
	I0314 11:13:22.915209   13130 logs.go:123] Gathering logs for kube-scheduler [47b861e30431] ...
	I0314 11:13:22.915220   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b861e30431"
	I0314 11:13:22.927010   13130 logs.go:123] Gathering logs for kube-scheduler [32036b13d627] ...
	I0314 11:13:22.927020   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32036b13d627"
	I0314 11:13:22.941600   13130 logs.go:123] Gathering logs for kube-controller-manager [71a0c47a9be2] ...
	I0314 11:13:22.941613   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71a0c47a9be2"
	I0314 11:13:22.953205   13130 logs.go:123] Gathering logs for storage-provisioner [819dad21cc40] ...
	I0314 11:13:22.953216   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 819dad21cc40"
	I0314 11:13:22.964790   13130 logs.go:123] Gathering logs for dmesg ...
	I0314 11:13:22.964803   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:13:22.969105   13130 logs.go:123] Gathering logs for kubelet ...
	I0314 11:13:22.969113   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:13:23.005361   13130 logs.go:123] Gathering logs for kube-apiserver [bf97ed5e8eab] ...
	I0314 11:13:23.005370   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf97ed5e8eab"
	I0314 11:13:23.022235   13130 logs.go:123] Gathering logs for etcd [bf3b24ba54ce] ...
	I0314 11:13:23.022245   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf3b24ba54ce"
	I0314 11:13:23.038360   13130 logs.go:123] Gathering logs for kube-proxy [af01c9fe94bd] ...
	I0314 11:13:23.038370   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af01c9fe94bd"
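
The other half of each cycle is the diagnostics pass: for every expected component the runner asks Docker for matching container IDs (docker ps -a --filter=name=k8s_<component> --format={{.ID}}) and then tails the last 400 log lines of each match; components with no match, like kindnet here, only produce a warning. A hedged local sketch of that loop follows; minikube runs the same commands remotely through its ssh_runner, and the output formatting here is invented.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

var components = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
}

// containerIDs mirrors: docker ps -a --filter=name=k8s_<name> --format={{.ID}}
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+name, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range components {
		ids, err := containerIDs(c)
		if err != nil || len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", c)
			continue
		}
		for _, id := range ids {
			// Mirrors: /bin/bash -c "docker logs --tail 400 <id>"
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("== %s [%s] ==\n%s", c, id, logs)
		}
	}
}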
	I0314 11:13:25.551470   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:13:30.554009   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:13:30.554319   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:13:30.583040   13130 logs.go:276] 2 containers: [bf97ed5e8eab ecd9237272db]
	I0314 11:13:30.583181   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:13:30.600856   13130 logs.go:276] 2 containers: [bf3b24ba54ce ffda7ab9d6f7]
	I0314 11:13:30.600950   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:13:30.615662   13130 logs.go:276] 1 containers: [34be9399e0e3]
	I0314 11:13:30.615743   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:13:30.627104   13130 logs.go:276] 2 containers: [47b861e30431 32036b13d627]
	I0314 11:13:30.627180   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:13:30.638358   13130 logs.go:276] 1 containers: [af01c9fe94bd]
	I0314 11:13:30.638422   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:13:30.649839   13130 logs.go:276] 2 containers: [b648808b453b 71a0c47a9be2]
	I0314 11:13:30.649903   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:13:30.660202   13130 logs.go:276] 0 containers: []
	W0314 11:13:30.660213   13130 logs.go:278] No container was found matching "kindnet"
	I0314 11:13:30.660274   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:13:30.671357   13130 logs.go:276] 2 containers: [819dad21cc40 5b39a3bc91a7]
	I0314 11:13:30.671378   13130 logs.go:123] Gathering logs for kube-proxy [af01c9fe94bd] ...
	I0314 11:13:30.671383   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af01c9fe94bd"
	I0314 11:13:30.683153   13130 logs.go:123] Gathering logs for kube-scheduler [32036b13d627] ...
	I0314 11:13:30.683165   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32036b13d627"
	I0314 11:13:30.698533   13130 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:13:30.698543   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:13:30.733405   13130 logs.go:123] Gathering logs for storage-provisioner [819dad21cc40] ...
	I0314 11:13:30.733418   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 819dad21cc40"
	I0314 11:13:30.746924   13130 logs.go:123] Gathering logs for Docker ...
	I0314 11:13:30.746938   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:13:30.770739   13130 logs.go:123] Gathering logs for dmesg ...
	I0314 11:13:30.770755   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:13:30.775217   13130 logs.go:123] Gathering logs for kube-scheduler [47b861e30431] ...
	I0314 11:13:30.775223   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b861e30431"
	I0314 11:13:30.787876   13130 logs.go:123] Gathering logs for kube-apiserver [bf97ed5e8eab] ...
	I0314 11:13:30.787886   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf97ed5e8eab"
	I0314 11:13:30.801933   13130 logs.go:123] Gathering logs for kube-apiserver [ecd9237272db] ...
	I0314 11:13:30.801945   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecd9237272db"
	I0314 11:13:30.815425   13130 logs.go:123] Gathering logs for etcd [bf3b24ba54ce] ...
	I0314 11:13:30.815434   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf3b24ba54ce"
	I0314 11:13:30.829560   13130 logs.go:123] Gathering logs for etcd [ffda7ab9d6f7] ...
	I0314 11:13:30.829570   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffda7ab9d6f7"
	I0314 11:13:30.844297   13130 logs.go:123] Gathering logs for coredns [34be9399e0e3] ...
	I0314 11:13:30.844308   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34be9399e0e3"
	I0314 11:13:30.855910   13130 logs.go:123] Gathering logs for kube-controller-manager [b648808b453b] ...
	I0314 11:13:30.855922   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b648808b453b"
	I0314 11:13:30.874265   13130 logs.go:123] Gathering logs for kube-controller-manager [71a0c47a9be2] ...
	I0314 11:13:30.874275   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71a0c47a9be2"
	I0314 11:13:30.885672   13130 logs.go:123] Gathering logs for kubelet ...
	I0314 11:13:30.885684   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:13:30.925916   13130 logs.go:123] Gathering logs for container status ...
	I0314 11:13:30.925937   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:13:30.938337   13130 logs.go:123] Gathering logs for storage-provisioner [5b39a3bc91a7] ...
	I0314 11:13:30.938350   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b39a3bc91a7"
	I0314 11:13:33.452911   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:13:38.454767   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:13:38.454937   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:13:38.470463   13130 logs.go:276] 2 containers: [bf97ed5e8eab ecd9237272db]
	I0314 11:13:38.470556   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:13:38.485999   13130 logs.go:276] 2 containers: [bf3b24ba54ce ffda7ab9d6f7]
	I0314 11:13:38.486075   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:13:38.497246   13130 logs.go:276] 1 containers: [34be9399e0e3]
	I0314 11:13:38.497320   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:13:38.511965   13130 logs.go:276] 2 containers: [47b861e30431 32036b13d627]
	I0314 11:13:38.512047   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:13:38.522940   13130 logs.go:276] 1 containers: [af01c9fe94bd]
	I0314 11:13:38.523012   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:13:38.534265   13130 logs.go:276] 2 containers: [b648808b453b 71a0c47a9be2]
	I0314 11:13:38.534337   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:13:38.544546   13130 logs.go:276] 0 containers: []
	W0314 11:13:38.544557   13130 logs.go:278] No container was found matching "kindnet"
	I0314 11:13:38.544615   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:13:38.558095   13130 logs.go:276] 2 containers: [819dad21cc40 5b39a3bc91a7]
	I0314 11:13:38.558113   13130 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:13:38.558119   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:13:38.597987   13130 logs.go:123] Gathering logs for storage-provisioner [5b39a3bc91a7] ...
	I0314 11:13:38.598003   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b39a3bc91a7"
	I0314 11:13:38.609584   13130 logs.go:123] Gathering logs for container status ...
	I0314 11:13:38.609595   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:13:38.622604   13130 logs.go:123] Gathering logs for kube-controller-manager [b648808b453b] ...
	I0314 11:13:38.622618   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b648808b453b"
	I0314 11:13:38.640140   13130 logs.go:123] Gathering logs for dmesg ...
	I0314 11:13:38.640151   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:13:38.645026   13130 logs.go:123] Gathering logs for kube-apiserver [bf97ed5e8eab] ...
	I0314 11:13:38.645038   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf97ed5e8eab"
	I0314 11:13:38.659102   13130 logs.go:123] Gathering logs for etcd [bf3b24ba54ce] ...
	I0314 11:13:38.659112   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf3b24ba54ce"
	I0314 11:13:38.676570   13130 logs.go:123] Gathering logs for kube-scheduler [47b861e30431] ...
	I0314 11:13:38.676582   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b861e30431"
	I0314 11:13:38.688473   13130 logs.go:123] Gathering logs for kube-scheduler [32036b13d627] ...
	I0314 11:13:38.688484   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32036b13d627"
	I0314 11:13:38.703278   13130 logs.go:123] Gathering logs for kube-controller-manager [71a0c47a9be2] ...
	I0314 11:13:38.703293   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71a0c47a9be2"
	I0314 11:13:38.714918   13130 logs.go:123] Gathering logs for Docker ...
	I0314 11:13:38.714929   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:13:38.738054   13130 logs.go:123] Gathering logs for kubelet ...
	I0314 11:13:38.738062   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:13:38.778132   13130 logs.go:123] Gathering logs for kube-apiserver [ecd9237272db] ...
	I0314 11:13:38.778146   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecd9237272db"
	I0314 11:13:38.791086   13130 logs.go:123] Gathering logs for etcd [ffda7ab9d6f7] ...
	I0314 11:13:38.791099   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffda7ab9d6f7"
	I0314 11:13:38.805044   13130 logs.go:123] Gathering logs for coredns [34be9399e0e3] ...
	I0314 11:13:38.805054   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34be9399e0e3"
	I0314 11:13:38.816910   13130 logs.go:123] Gathering logs for kube-proxy [af01c9fe94bd] ...
	I0314 11:13:38.816922   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af01c9fe94bd"
	I0314 11:13:38.828978   13130 logs.go:123] Gathering logs for storage-provisioner [819dad21cc40] ...
	I0314 11:13:38.828988   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 819dad21cc40"
	I0314 11:13:41.346740   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:13:46.349168   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:13:46.349282   13130 kubeadm.go:591] duration metric: took 4m4.1781365s to restartPrimaryControlPlane
	W0314 11:13:46.349358   13130 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0314 11:13:46.349384   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0314 11:13:47.403542   13130 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.054168375s)
	I0314 11:13:47.403595   13130 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 11:13:47.409101   13130 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 11:13:47.412093   13130 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 11:13:47.414820   13130 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 11:13:47.414827   13130 kubeadm.go:156] found existing configuration files:
	
	I0314 11:13:47.414850   13130 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52128 /etc/kubernetes/admin.conf
	I0314 11:13:47.417470   13130 kubeadm.go:162] "https://control-plane.minikube.internal:52128" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:52128 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 11:13:47.417498   13130 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 11:13:47.420617   13130 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52128 /etc/kubernetes/kubelet.conf
	I0314 11:13:47.423336   13130 kubeadm.go:162] "https://control-plane.minikube.internal:52128" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:52128 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 11:13:47.423360   13130 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 11:13:47.426185   13130 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52128 /etc/kubernetes/controller-manager.conf
	I0314 11:13:47.429581   13130 kubeadm.go:162] "https://control-plane.minikube.internal:52128" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:52128 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 11:13:47.429629   13130 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 11:13:47.433173   13130 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52128 /etc/kubernetes/scheduler.conf
	I0314 11:13:47.436546   13130 kubeadm.go:162] "https://control-plane.minikube.internal:52128" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:52128 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 11:13:47.436589   13130 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
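
Having given up on restarting the existing control plane, minikube resets it and clears any stale kubeconfigs: each file under /etc/kubernetes is grepped for the expected control-plane endpoint and removed when the check fails, which is exactly the "Process exited with status 2" / rm -f sequence above (a no-op here, since the reset already deleted the files). A sketch of that check follows, with run standing in for minikube's ssh_runner; the helper is illustrative, not minikube's real API.

package main

import (
	"fmt"
	"os/exec"
)

func cleanStaleConfigs(endpoint string, run func(cmd string) error) {
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		// grep exits non-zero when the endpoint (or the whole file) is
		// missing -- the "Process exited with status 2" seen above.
		if err := run(fmt.Sprintf("sudo grep %s %s", endpoint, f)); err != nil {
			_ = run(fmt.Sprintf("sudo rm -f %s", f)) // no-op if already absent
		}
	}
}

func main() {
	run := func(cmd string) error { return exec.Command("/bin/bash", "-c", cmd).Run() }
	cleanStaleConfigs("https://control-plane.minikube.internal:52128", run)
}

Removing a config that no longer names the expected endpoint forces the subsequent kubeadm init to regenerate it rather than reuse a file pointing at a dead address.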
	I0314 11:13:47.439733   13130 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0314 11:13:47.459736   13130 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0314 11:13:47.459776   13130 kubeadm.go:309] [preflight] Running pre-flight checks
	I0314 11:13:47.516239   13130 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0314 11:13:47.516297   13130 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0314 11:13:47.516343   13130 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0314 11:13:47.566438   13130 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0314 11:13:47.575208   13130 out.go:204]   - Generating certificates and keys ...
	I0314 11:13:47.575247   13130 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0314 11:13:47.575285   13130 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0314 11:13:47.575328   13130 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0314 11:13:47.575360   13130 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0314 11:13:47.575400   13130 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0314 11:13:47.575432   13130 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0314 11:13:47.575468   13130 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0314 11:13:47.575504   13130 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0314 11:13:47.575571   13130 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0314 11:13:47.575654   13130 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0314 11:13:47.575701   13130 kubeadm.go:309] [certs] Using the existing "sa" key
	I0314 11:13:47.575753   13130 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0314 11:13:47.601705   13130 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0314 11:13:47.716545   13130 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0314 11:13:47.778717   13130 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0314 11:13:47.866194   13130 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0314 11:13:47.900073   13130 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0314 11:13:47.900397   13130 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0314 11:13:47.900436   13130 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0314 11:13:48.001800   13130 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0314 11:13:48.005942   13130 out.go:204]   - Booting up control plane ...
	I0314 11:13:48.006023   13130 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0314 11:13:48.006062   13130 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0314 11:13:48.006093   13130 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0314 11:13:48.006127   13130 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0314 11:13:48.006199   13130 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0314 11:13:52.006216   13130 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.001610 seconds
	I0314 11:13:52.006277   13130 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0314 11:13:52.009914   13130 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0314 11:13:52.520530   13130 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0314 11:13:52.520800   13130 kubeadm.go:309] [mark-control-plane] Marking the node running-upgrade-636000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0314 11:13:53.024424   13130 kubeadm.go:309] [bootstrap-token] Using token: djvjdd.cohpa5f8p95pbnzu
	I0314 11:13:53.030964   13130 out.go:204]   - Configuring RBAC rules ...
	I0314 11:13:53.031035   13130 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0314 11:13:53.031084   13130 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0314 11:13:53.033325   13130 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0314 11:13:53.035717   13130 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0314 11:13:53.036517   13130 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0314 11:13:53.037377   13130 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0314 11:13:53.041551   13130 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0314 11:13:53.225106   13130 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0314 11:13:53.428396   13130 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0314 11:13:53.428844   13130 kubeadm.go:309] 
	I0314 11:13:53.428877   13130 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0314 11:13:53.428881   13130 kubeadm.go:309] 
	I0314 11:13:53.428919   13130 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0314 11:13:53.428924   13130 kubeadm.go:309] 
	I0314 11:13:53.428939   13130 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0314 11:13:53.428971   13130 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0314 11:13:53.428995   13130 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0314 11:13:53.428997   13130 kubeadm.go:309] 
	I0314 11:13:53.429023   13130 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0314 11:13:53.429027   13130 kubeadm.go:309] 
	I0314 11:13:53.429048   13130 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0314 11:13:53.429051   13130 kubeadm.go:309] 
	I0314 11:13:53.429079   13130 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0314 11:13:53.429120   13130 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0314 11:13:53.429163   13130 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0314 11:13:53.429168   13130 kubeadm.go:309] 
	I0314 11:13:53.429209   13130 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0314 11:13:53.429254   13130 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0314 11:13:53.429260   13130 kubeadm.go:309] 
	I0314 11:13:53.429302   13130 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token djvjdd.cohpa5f8p95pbnzu \
	I0314 11:13:53.429357   13130 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8e5a4174d82744a5f88c6921b8e1e2cb9a0b16334ed79a2160efb286b25bc185 \
	I0314 11:13:53.429370   13130 kubeadm.go:309] 	--control-plane 
	I0314 11:13:53.429375   13130 kubeadm.go:309] 
	I0314 11:13:53.429415   13130 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0314 11:13:53.429419   13130 kubeadm.go:309] 
	I0314 11:13:53.429460   13130 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token djvjdd.cohpa5f8p95pbnzu \
	I0314 11:13:53.429524   13130 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8e5a4174d82744a5f88c6921b8e1e2cb9a0b16334ed79a2160efb286b25bc185 
	I0314 11:13:53.429581   13130 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0314 11:13:53.429599   13130 cni.go:84] Creating CNI manager for ""
	I0314 11:13:53.429609   13130 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0314 11:13:53.433947   13130 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0314 11:13:53.442947   13130 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0314 11:13:53.445799   13130 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
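
With init complete, the bridge CNI chosen at cni.go:158 is configured by copying a small conflist (457 bytes here) to /etc/cni/net.d/1-k8s.conflist. The log does not reproduce the file's contents; the JSON below is a typical bridge-plus-portmap chain of the kind such a conflist contains, embedded in a short Go writer purely for illustration (the subnet and plugin flags are assumptions, not the actual file).

package main

import "os"

// A representative bridge CNI conflist; NOT the verbatim 1-k8s.conflist.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true,
      "ipMasq": true, "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	// Writing to the working directory here; on the node the target is
	// /etc/cni/net.d/1-k8s.conflist.
	if err := os.WriteFile("1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}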
	I0314 11:13:53.451244   13130 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0314 11:13:53.451309   13130 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 11:13:53.451567   13130 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-636000 minikube.k8s.io/updated_at=2024_03_14T11_13_53_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7 minikube.k8s.io/name=running-upgrade-636000 minikube.k8s.io/primary=true
	I0314 11:13:53.500327   13130 ops.go:34] apiserver oom_adj: -16
	I0314 11:13:53.500341   13130 kubeadm.go:1106] duration metric: took 49.081958ms to wait for elevateKubeSystemPrivileges
	W0314 11:13:53.500679   13130 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0314 11:13:53.500684   13130 kubeadm.go:393] duration metric: took 4m11.343798375s to StartCluster
	I0314 11:13:53.500699   13130 settings.go:142] acquiring lock: {Name:mk5ca7daa9f67a4c042500e8aa0b177318634dbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 11:13:53.500851   13130 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18384-10823/kubeconfig
	I0314 11:13:53.501408   13130 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18384-10823/kubeconfig: {Name:mk22117ed76e85ca64a0d4fa77d593f7fc7d1176 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 11:13:53.501726   13130 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0314 11:13:53.505916   13130 out.go:177] * Verifying Kubernetes components...
	I0314 11:13:53.501836   13130 config.go:182] Loaded profile config "running-upgrade-636000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0314 11:13:53.501914   13130 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0314 11:13:53.513937   13130 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-636000"
	I0314 11:13:53.513950   13130 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-636000"
	W0314 11:13:53.513953   13130 addons.go:243] addon storage-provisioner should already be in state true
	I0314 11:13:53.513967   13130 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 11:13:53.513972   13130 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-636000"
	I0314 11:13:53.513976   13130 host.go:66] Checking if "running-upgrade-636000" exists ...
	I0314 11:13:53.514131   13130 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-636000"
	I0314 11:13:53.517860   13130 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 11:13:53.524022   13130 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 11:13:53.524031   13130 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0314 11:13:53.524041   13130 sshutil.go:53] new ssh client: &{IP:localhost Port:52096 SSHKeyPath:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/running-upgrade-636000/id_rsa Username:docker}
	I0314 11:13:53.525298   13130 kapi.go:59] client config for running-upgrade-636000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/running-upgrade-636000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/running-upgrade-636000/client.key", CAFile:"/Users/jenkins/minikube-integration/18384-10823/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1045a4630), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0314 11:13:53.525528   13130 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-636000"
	W0314 11:13:53.525533   13130 addons.go:243] addon default-storageclass should already be in state true
	I0314 11:13:53.525544   13130 host.go:66] Checking if "running-upgrade-636000" exists ...
	I0314 11:13:53.526300   13130 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0314 11:13:53.526305   13130 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0314 11:13:53.526309   13130 sshutil.go:53] new ssh client: &{IP:localhost Port:52096 SSHKeyPath:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/running-upgrade-636000/id_rsa Username:docker}
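
Both addon installs reuse the same transport: an SSH connection to the VM's forwarded port (localhost:52096) as user docker, authenticated with the profile's id_rsa key, as the two sshutil lines above record. A minimal equivalent using golang.org/x/crypto/ssh follows (an extra module; host-key verification is skipped here only to keep the sketch short, which a real client should not do).

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path, address, and user are taken from the log above.
	key, err := os.ReadFile("/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/running-upgrade-636000/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	client, err := ssh.Dial("tcp", "localhost:52096", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // illustration only
	})
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()
	out, err := session.CombinedOutput("sudo systemctl is-active kubelet")
	fmt.Printf("%s err=%v\n", out, err)
}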
	I0314 11:13:53.609244   13130 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 11:13:53.615242   13130 api_server.go:52] waiting for apiserver process to appear ...
	I0314 11:13:53.615297   13130 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 11:13:53.620137   13130 api_server.go:72] duration metric: took 118.398459ms to wait for apiserver process to appear ...
	I0314 11:13:53.620151   13130 api_server.go:88] waiting for apiserver healthz status ...
	I0314 11:13:53.620158   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:13:53.661049   13130 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0314 11:13:53.661664   13130 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
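
Each addon manifest is first scp'd under /etc/kubernetes/addons/ and then applied with the node-local kubectl binary and kubeconfig, exactly as the two Run lines above show. A hedged wrapper around that command follows; the paths match the log, but the helper itself is illustrative rather than minikube's API.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyAddon mirrors: sudo KUBECONFIG=/var/lib/minikube/kubeconfig
//   /var/lib/minikube/binaries/v1.24.1/kubectl apply -f <manifest>
func applyAddon(manifest string) error {
	cmd := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.24.1/kubectl",
		"apply", "-f", manifest)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	if err := applyAddon("/etc/kubernetes/addons/storage-provisioner.yaml"); err != nil {
		fmt.Println("apply failed:", err) // plausible here while the apiserver is down
	}
}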
	I0314 11:13:58.622198   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:13:58.622232   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:14:03.622683   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:14:03.622716   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:14:08.623311   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:14:08.623332   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:14:13.623807   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:14:13.623866   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:14:18.624683   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:14:18.624754   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:14:23.626209   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:14:23.626243   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0314 11:14:24.018812   13130 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0314 11:14:24.022388   13130 out.go:177] * Enabled addons: storage-provisioner
	I0314 11:14:24.034383   13130 addons.go:505] duration metric: took 30.533042917s for enable addons: enabled=[storage-provisioner]
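
The asymmetry between the two addons follows from what each needs: storage-provisioner only required the kubectl apply shown earlier, while the default-storageclass callback must list StorageClasses through the API server, and with 10.0.2.15:8443 unreachable that list call is what surfaces the "dial tcp ... i/o timeout" above. A client-go sketch of the failing call follows (requires the k8s.io/client-go module; the kubeconfig path is the node-local one from the log).

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	// This is the API round-trip the default-storageclass callback depends on.
	scs, err := clientset.StorageV1().StorageClasses().List(ctx, metav1.ListOptions{})
	if err != nil {
		fmt.Println("listing StorageClasses:", err) // e.g. dial tcp 10.0.2.15:8443: i/o timeout
		return
	}
	for _, sc := range scs.Items {
		fmt.Println(sc.Name)
	}
}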
	I0314 11:14:28.627746   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:14:28.627795   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:14:33.629582   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:14:33.629630   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:14:38.631892   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:14:38.631931   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:14:43.634074   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:14:43.634123   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:14:48.636282   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:14:48.636326   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:14:53.636651   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:14:53.636796   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:14:53.655431   13130 logs.go:276] 1 containers: [f764ecfece48]
	I0314 11:14:53.655504   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:14:53.667193   13130 logs.go:276] 1 containers: [41bacb9f9ff3]
	I0314 11:14:53.667262   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:14:53.678730   13130 logs.go:276] 2 containers: [e4efe5340121 a5edbd1e8e3a]
	I0314 11:14:53.678803   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:14:53.690722   13130 logs.go:276] 1 containers: [12305d6bc0e3]
	I0314 11:14:53.690803   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:14:53.702231   13130 logs.go:276] 1 containers: [b218570cde77]
	I0314 11:14:53.702302   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:14:53.713613   13130 logs.go:276] 1 containers: [c539cb8e52c8]
	I0314 11:14:53.713685   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:14:53.724470   13130 logs.go:276] 0 containers: []
	W0314 11:14:53.724482   13130 logs.go:278] No container was found matching "kindnet"
	I0314 11:14:53.724541   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:14:53.735697   13130 logs.go:276] 1 containers: [118988a93a39]
	I0314 11:14:53.735719   13130 logs.go:123] Gathering logs for Docker ...
	I0314 11:14:53.735725   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:14:53.760119   13130 logs.go:123] Gathering logs for container status ...
	I0314 11:14:53.760127   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:14:53.772569   13130 logs.go:123] Gathering logs for kubelet ...
	I0314 11:14:53.772580   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:14:53.807183   13130 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:14:53.807191   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:14:53.843992   13130 logs.go:123] Gathering logs for kube-apiserver [f764ecfece48] ...
	I0314 11:14:53.844006   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f764ecfece48"
	I0314 11:14:53.859455   13130 logs.go:123] Gathering logs for coredns [a5edbd1e8e3a] ...
	I0314 11:14:53.859468   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5edbd1e8e3a"
	I0314 11:14:53.872413   13130 logs.go:123] Gathering logs for kube-scheduler [12305d6bc0e3] ...
	I0314 11:14:53.872425   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12305d6bc0e3"
	I0314 11:14:53.888189   13130 logs.go:123] Gathering logs for storage-provisioner [118988a93a39] ...
	I0314 11:14:53.888202   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 118988a93a39"
	I0314 11:14:53.900429   13130 logs.go:123] Gathering logs for dmesg ...
	I0314 11:14:53.900442   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:14:53.905001   13130 logs.go:123] Gathering logs for etcd [41bacb9f9ff3] ...
	I0314 11:14:53.905015   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41bacb9f9ff3"
	I0314 11:14:53.920244   13130 logs.go:123] Gathering logs for coredns [e4efe5340121] ...
	I0314 11:14:53.920256   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4efe5340121"
	I0314 11:14:53.932652   13130 logs.go:123] Gathering logs for kube-proxy [b218570cde77] ...
	I0314 11:14:53.932666   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b218570cde77"
	I0314 11:14:53.945562   13130 logs.go:123] Gathering logs for kube-controller-manager [c539cb8e52c8] ...
	I0314 11:14:53.945572   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c539cb8e52c8"
	I0314 11:14:56.464646   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:15:01.466728   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:15:01.466844   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:15:01.478997   13130 logs.go:276] 1 containers: [f764ecfece48]
	I0314 11:15:01.479075   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:15:01.490171   13130 logs.go:276] 1 containers: [41bacb9f9ff3]
	I0314 11:15:01.490234   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:15:01.501566   13130 logs.go:276] 2 containers: [e4efe5340121 a5edbd1e8e3a]
	I0314 11:15:01.501630   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:15:01.513510   13130 logs.go:276] 1 containers: [12305d6bc0e3]
	I0314 11:15:01.513583   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:15:01.527936   13130 logs.go:276] 1 containers: [b218570cde77]
	I0314 11:15:01.528010   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:15:01.539669   13130 logs.go:276] 1 containers: [c539cb8e52c8]
	I0314 11:15:01.539740   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:15:01.550472   13130 logs.go:276] 0 containers: []
	W0314 11:15:01.550488   13130 logs.go:278] No container was found matching "kindnet"
	I0314 11:15:01.550548   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:15:01.561595   13130 logs.go:276] 1 containers: [118988a93a39]
	I0314 11:15:01.561612   13130 logs.go:123] Gathering logs for coredns [e4efe5340121] ...
	I0314 11:15:01.561617   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4efe5340121"
	I0314 11:15:01.574549   13130 logs.go:123] Gathering logs for coredns [a5edbd1e8e3a] ...
	I0314 11:15:01.574560   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5edbd1e8e3a"
	I0314 11:15:01.586962   13130 logs.go:123] Gathering logs for kube-scheduler [12305d6bc0e3] ...
	I0314 11:15:01.586982   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12305d6bc0e3"
	I0314 11:15:01.602488   13130 logs.go:123] Gathering logs for kube-controller-manager [c539cb8e52c8] ...
	I0314 11:15:01.602501   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c539cb8e52c8"
	I0314 11:15:01.620293   13130 logs.go:123] Gathering logs for storage-provisioner [118988a93a39] ...
	I0314 11:15:01.620304   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 118988a93a39"
	I0314 11:15:01.632188   13130 logs.go:123] Gathering logs for Docker ...
	I0314 11:15:01.632199   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:15:01.657076   13130 logs.go:123] Gathering logs for kube-apiserver [f764ecfece48] ...
	I0314 11:15:01.657086   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f764ecfece48"
	I0314 11:15:01.672287   13130 logs.go:123] Gathering logs for etcd [41bacb9f9ff3] ...
	I0314 11:15:01.672297   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41bacb9f9ff3"
	I0314 11:15:01.687221   13130 logs.go:123] Gathering logs for container status ...
	I0314 11:15:01.687231   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:15:01.698866   13130 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:15:01.698879   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:15:01.737689   13130 logs.go:123] Gathering logs for kube-proxy [b218570cde77] ...
	I0314 11:15:01.737701   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b218570cde77"
	I0314 11:15:01.751867   13130 logs.go:123] Gathering logs for kubelet ...
	I0314 11:15:01.751877   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:15:01.787572   13130 logs.go:123] Gathering logs for dmesg ...
	I0314 11:15:01.787580   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:15:04.293775   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:15:09.296267   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:15:09.296611   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:15:09.330731   13130 logs.go:276] 1 containers: [f764ecfece48]
	I0314 11:15:09.330859   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:15:09.349930   13130 logs.go:276] 1 containers: [41bacb9f9ff3]
	I0314 11:15:09.350016   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:15:09.363589   13130 logs.go:276] 2 containers: [e4efe5340121 a5edbd1e8e3a]
	I0314 11:15:09.363666   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:15:09.375272   13130 logs.go:276] 1 containers: [12305d6bc0e3]
	I0314 11:15:09.375348   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:15:09.386471   13130 logs.go:276] 1 containers: [b218570cde77]
	I0314 11:15:09.386541   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:15:09.397027   13130 logs.go:276] 1 containers: [c539cb8e52c8]
	I0314 11:15:09.397097   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:15:09.407748   13130 logs.go:276] 0 containers: []
	W0314 11:15:09.407759   13130 logs.go:278] No container was found matching "kindnet"
	I0314 11:15:09.407828   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:15:09.418865   13130 logs.go:276] 1 containers: [118988a93a39]
	I0314 11:15:09.418881   13130 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:15:09.418887   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:15:09.455690   13130 logs.go:123] Gathering logs for coredns [e4efe5340121] ...
	I0314 11:15:09.455701   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4efe5340121"
	I0314 11:15:09.468149   13130 logs.go:123] Gathering logs for coredns [a5edbd1e8e3a] ...
	I0314 11:15:09.468159   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5edbd1e8e3a"
	I0314 11:15:09.479984   13130 logs.go:123] Gathering logs for kube-scheduler [12305d6bc0e3] ...
	I0314 11:15:09.479995   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12305d6bc0e3"
	I0314 11:15:09.495618   13130 logs.go:123] Gathering logs for kube-proxy [b218570cde77] ...
	I0314 11:15:09.495628   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b218570cde77"
	I0314 11:15:09.507706   13130 logs.go:123] Gathering logs for storage-provisioner [118988a93a39] ...
	I0314 11:15:09.507717   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 118988a93a39"
	I0314 11:15:09.519790   13130 logs.go:123] Gathering logs for kubelet ...
	I0314 11:15:09.519800   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:15:09.558300   13130 logs.go:123] Gathering logs for dmesg ...
	I0314 11:15:09.558309   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:15:09.562598   13130 logs.go:123] Gathering logs for kube-apiserver [f764ecfece48] ...
	I0314 11:15:09.562603   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f764ecfece48"
	I0314 11:15:09.577103   13130 logs.go:123] Gathering logs for etcd [41bacb9f9ff3] ...
	I0314 11:15:09.577114   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41bacb9f9ff3"
	I0314 11:15:09.592347   13130 logs.go:123] Gathering logs for kube-controller-manager [c539cb8e52c8] ...
	I0314 11:15:09.592357   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c539cb8e52c8"
	I0314 11:15:09.610401   13130 logs.go:123] Gathering logs for Docker ...
	I0314 11:15:09.610412   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:15:09.633783   13130 logs.go:123] Gathering logs for container status ...
	I0314 11:15:09.633791   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
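	Each gathering pass starts by rediscovering the component containers, one `docker ps -a` per component filtered on the k8s_<name> prefix; kindnet matches nothing on this cluster, hence the repeated warning. A rough sketch of that discovery step, with a hypothetical helper rather than minikube's logs.go:

```go
// Sketch only: per-component container discovery matching the
// "docker ps -a --filter=name=k8s_<component> --format={{.ID}}" lines above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs is a hypothetical helper, not minikube's actual code.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
	}
	for _, c := range components {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Printf("%s: %v\n", c, err)
			continue
		}
		if len(ids) == 0 {
			fmt.Printf("W No container was found matching %q\n", c) // kindnet hits this branch
			continue
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids)
	}
}
```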
	I0314 11:15:12.149170   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:15:17.151442   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:15:17.151722   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:15:17.179296   13130 logs.go:276] 1 containers: [f764ecfece48]
	I0314 11:15:17.179418   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:15:17.197231   13130 logs.go:276] 1 containers: [41bacb9f9ff3]
	I0314 11:15:17.197318   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:15:17.210333   13130 logs.go:276] 2 containers: [e4efe5340121 a5edbd1e8e3a]
	I0314 11:15:17.210409   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:15:17.229772   13130 logs.go:276] 1 containers: [12305d6bc0e3]
	I0314 11:15:17.229842   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:15:17.241627   13130 logs.go:276] 1 containers: [b218570cde77]
	I0314 11:15:17.241704   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:15:17.252779   13130 logs.go:276] 1 containers: [c539cb8e52c8]
	I0314 11:15:17.252848   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:15:17.262926   13130 logs.go:276] 0 containers: []
	W0314 11:15:17.262939   13130 logs.go:278] No container was found matching "kindnet"
	I0314 11:15:17.262998   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:15:17.275772   13130 logs.go:276] 1 containers: [118988a93a39]
	I0314 11:15:17.275789   13130 logs.go:123] Gathering logs for kubelet ...
	I0314 11:15:17.275794   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:15:17.310356   13130 logs.go:123] Gathering logs for kube-scheduler [12305d6bc0e3] ...
	I0314 11:15:17.310368   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12305d6bc0e3"
	I0314 11:15:17.330879   13130 logs.go:123] Gathering logs for kube-controller-manager [c539cb8e52c8] ...
	I0314 11:15:17.330894   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c539cb8e52c8"
	I0314 11:15:17.348961   13130 logs.go:123] Gathering logs for Docker ...
	I0314 11:15:17.348971   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:15:17.373503   13130 logs.go:123] Gathering logs for storage-provisioner [118988a93a39] ...
	I0314 11:15:17.373513   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 118988a93a39"
	I0314 11:15:17.392636   13130 logs.go:123] Gathering logs for dmesg ...
	I0314 11:15:17.392650   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:15:17.397596   13130 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:15:17.397602   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:15:17.434465   13130 logs.go:123] Gathering logs for kube-apiserver [f764ecfece48] ...
	I0314 11:15:17.434477   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f764ecfece48"
	I0314 11:15:17.448984   13130 logs.go:123] Gathering logs for etcd [41bacb9f9ff3] ...
	I0314 11:15:17.448996   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41bacb9f9ff3"
	I0314 11:15:17.463628   13130 logs.go:123] Gathering logs for coredns [e4efe5340121] ...
	I0314 11:15:17.463643   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4efe5340121"
	I0314 11:15:17.476513   13130 logs.go:123] Gathering logs for coredns [a5edbd1e8e3a] ...
	I0314 11:15:17.476523   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5edbd1e8e3a"
	I0314 11:15:17.488133   13130 logs.go:123] Gathering logs for kube-proxy [b218570cde77] ...
	I0314 11:15:17.488145   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b218570cde77"
	I0314 11:15:17.499825   13130 logs.go:123] Gathering logs for container status ...
	I0314 11:15:17.499836   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:15:20.013769   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:15:25.015997   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:15:25.016237   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:15:25.041677   13130 logs.go:276] 1 containers: [f764ecfece48]
	I0314 11:15:25.041798   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:15:25.057943   13130 logs.go:276] 1 containers: [41bacb9f9ff3]
	I0314 11:15:25.058022   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:15:25.071216   13130 logs.go:276] 2 containers: [e4efe5340121 a5edbd1e8e3a]
	I0314 11:15:25.071278   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:15:25.083055   13130 logs.go:276] 1 containers: [12305d6bc0e3]
	I0314 11:15:25.083128   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:15:25.094064   13130 logs.go:276] 1 containers: [b218570cde77]
	I0314 11:15:25.094140   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:15:25.105229   13130 logs.go:276] 1 containers: [c539cb8e52c8]
	I0314 11:15:25.105296   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:15:25.118525   13130 logs.go:276] 0 containers: []
	W0314 11:15:25.118535   13130 logs.go:278] No container was found matching "kindnet"
	I0314 11:15:25.118588   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:15:25.129875   13130 logs.go:276] 1 containers: [118988a93a39]
	I0314 11:15:25.129895   13130 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:15:25.129900   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:15:25.166069   13130 logs.go:123] Gathering logs for coredns [e4efe5340121] ...
	I0314 11:15:25.166081   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4efe5340121"
	I0314 11:15:25.183061   13130 logs.go:123] Gathering logs for coredns [a5edbd1e8e3a] ...
	I0314 11:15:25.183076   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5edbd1e8e3a"
	I0314 11:15:25.195383   13130 logs.go:123] Gathering logs for kube-controller-manager [c539cb8e52c8] ...
	I0314 11:15:25.195393   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c539cb8e52c8"
	I0314 11:15:25.218422   13130 logs.go:123] Gathering logs for storage-provisioner [118988a93a39] ...
	I0314 11:15:25.218435   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 118988a93a39"
	I0314 11:15:25.233947   13130 logs.go:123] Gathering logs for Docker ...
	I0314 11:15:25.233958   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:15:25.257598   13130 logs.go:123] Gathering logs for kubelet ...
	I0314 11:15:25.257606   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:15:25.291723   13130 logs.go:123] Gathering logs for kube-apiserver [f764ecfece48] ...
	I0314 11:15:25.291733   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f764ecfece48"
	I0314 11:15:25.311338   13130 logs.go:123] Gathering logs for etcd [41bacb9f9ff3] ...
	I0314 11:15:25.311351   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41bacb9f9ff3"
	I0314 11:15:25.327928   13130 logs.go:123] Gathering logs for kube-scheduler [12305d6bc0e3] ...
	I0314 11:15:25.327940   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12305d6bc0e3"
	I0314 11:15:25.344134   13130 logs.go:123] Gathering logs for kube-proxy [b218570cde77] ...
	I0314 11:15:25.344146   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b218570cde77"
	I0314 11:15:25.356556   13130 logs.go:123] Gathering logs for container status ...
	I0314 11:15:25.356569   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:15:25.368915   13130 logs.go:123] Gathering logs for dmesg ...
	I0314 11:15:25.368924   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
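	The gather targets themselves are a fixed set of shell one-liners: journalctl for kubelet and Docker/cri-docker, a filtered dmesg, the pinned v1.24.1 kubectl for describe nodes, a crictl-with-docker-fallback for container status, plus `docker logs --tail 400` per discovered container. The gather order shuffles from cycle to cycle, which would be consistent with iteration over a Go map; a sketch under that assumption, with the commands copied verbatim from the log:

```go
// Sketch only: the fixed gather set replayed on every failed health check.
// The map-based dispatch is an assumption that would explain the shuffled
// gather order between cycles; it is not confirmed by this report.
package main

import (
	"fmt"
	"os/exec"
)

func gather(name, shellCmd string) {
	fmt.Printf("Gathering logs for %s ...\n", name)
	if out, err := exec.Command("/bin/bash", "-c", shellCmd).CombinedOutput(); err != nil {
		fmt.Printf("%s failed: %v\n%s", name, err, out)
	}
}

func main() {
	sources := map[string]string{
		"kubelet":          "sudo journalctl -u kubelet -n 400",
		"Docker":           "sudo journalctl -u docker -u cri-docker -n 400",
		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"describe nodes":   "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	for name, cmd := range sources { // map iteration order is deliberately randomized in Go
		gather(name, cmd)
	}
	// per-container sources are added dynamically as:
	//   docker logs --tail 400 <container-id>
}
```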
	I0314 11:15:27.875435   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:15:32.877852   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:15:32.878052   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:15:32.901215   13130 logs.go:276] 1 containers: [f764ecfece48]
	I0314 11:15:32.901307   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:15:32.918056   13130 logs.go:276] 1 containers: [41bacb9f9ff3]
	I0314 11:15:32.918139   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:15:32.931399   13130 logs.go:276] 2 containers: [e4efe5340121 a5edbd1e8e3a]
	I0314 11:15:32.931462   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:15:32.943101   13130 logs.go:276] 1 containers: [12305d6bc0e3]
	I0314 11:15:32.943168   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:15:32.953965   13130 logs.go:276] 1 containers: [b218570cde77]
	I0314 11:15:32.954042   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:15:32.965104   13130 logs.go:276] 1 containers: [c539cb8e52c8]
	I0314 11:15:32.965162   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:15:32.975572   13130 logs.go:276] 0 containers: []
	W0314 11:15:32.975583   13130 logs.go:278] No container was found matching "kindnet"
	I0314 11:15:32.975642   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:15:32.986755   13130 logs.go:276] 1 containers: [118988a93a39]
	I0314 11:15:32.986770   13130 logs.go:123] Gathering logs for kube-apiserver [f764ecfece48] ...
	I0314 11:15:32.986775   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f764ecfece48"
	I0314 11:15:33.001334   13130 logs.go:123] Gathering logs for etcd [41bacb9f9ff3] ...
	I0314 11:15:33.001348   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41bacb9f9ff3"
	I0314 11:15:33.016020   13130 logs.go:123] Gathering logs for coredns [a5edbd1e8e3a] ...
	I0314 11:15:33.016030   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5edbd1e8e3a"
	I0314 11:15:33.030119   13130 logs.go:123] Gathering logs for kube-proxy [b218570cde77] ...
	I0314 11:15:33.030130   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b218570cde77"
	I0314 11:15:33.042299   13130 logs.go:123] Gathering logs for kube-controller-manager [c539cb8e52c8] ...
	I0314 11:15:33.042309   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c539cb8e52c8"
	I0314 11:15:33.060767   13130 logs.go:123] Gathering logs for kubelet ...
	I0314 11:15:33.060777   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:15:33.096911   13130 logs.go:123] Gathering logs for dmesg ...
	I0314 11:15:33.096919   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:15:33.101843   13130 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:15:33.101848   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:15:33.142677   13130 logs.go:123] Gathering logs for Docker ...
	I0314 11:15:33.142691   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:15:33.165280   13130 logs.go:123] Gathering logs for container status ...
	I0314 11:15:33.165287   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:15:33.176882   13130 logs.go:123] Gathering logs for coredns [e4efe5340121] ...
	I0314 11:15:33.176895   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4efe5340121"
	I0314 11:15:33.189920   13130 logs.go:123] Gathering logs for kube-scheduler [12305d6bc0e3] ...
	I0314 11:15:33.189930   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12305d6bc0e3"
	I0314 11:15:33.205760   13130 logs.go:123] Gathering logs for storage-provisioner [118988a93a39] ...
	I0314 11:15:33.205770   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 118988a93a39"
	I0314 11:15:35.719944   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:15:40.722189   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:15:40.722357   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:15:40.735909   13130 logs.go:276] 1 containers: [f764ecfece48]
	I0314 11:15:40.735995   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:15:40.747455   13130 logs.go:276] 1 containers: [41bacb9f9ff3]
	I0314 11:15:40.747525   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:15:40.758279   13130 logs.go:276] 2 containers: [e4efe5340121 a5edbd1e8e3a]
	I0314 11:15:40.758374   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:15:40.769862   13130 logs.go:276] 1 containers: [12305d6bc0e3]
	I0314 11:15:40.769934   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:15:40.781977   13130 logs.go:276] 1 containers: [b218570cde77]
	I0314 11:15:40.782049   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:15:40.793616   13130 logs.go:276] 1 containers: [c539cb8e52c8]
	I0314 11:15:40.793686   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:15:40.804807   13130 logs.go:276] 0 containers: []
	W0314 11:15:40.804819   13130 logs.go:278] No container was found matching "kindnet"
	I0314 11:15:40.804879   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:15:40.816758   13130 logs.go:276] 1 containers: [118988a93a39]
	I0314 11:15:40.816775   13130 logs.go:123] Gathering logs for storage-provisioner [118988a93a39] ...
	I0314 11:15:40.816780   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 118988a93a39"
	I0314 11:15:40.829506   13130 logs.go:123] Gathering logs for Docker ...
	I0314 11:15:40.829514   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:15:40.854741   13130 logs.go:123] Gathering logs for container status ...
	I0314 11:15:40.854749   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:15:40.866828   13130 logs.go:123] Gathering logs for kubelet ...
	I0314 11:15:40.866839   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:15:40.901372   13130 logs.go:123] Gathering logs for dmesg ...
	I0314 11:15:40.901383   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:15:40.905911   13130 logs.go:123] Gathering logs for kube-apiserver [f764ecfece48] ...
	I0314 11:15:40.905920   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f764ecfece48"
	I0314 11:15:40.920936   13130 logs.go:123] Gathering logs for etcd [41bacb9f9ff3] ...
	I0314 11:15:40.920946   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41bacb9f9ff3"
	I0314 11:15:40.935969   13130 logs.go:123] Gathering logs for kube-controller-manager [c539cb8e52c8] ...
	I0314 11:15:40.935982   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c539cb8e52c8"
	I0314 11:15:40.953571   13130 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:15:40.953579   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:15:40.995810   13130 logs.go:123] Gathering logs for coredns [e4efe5340121] ...
	I0314 11:15:40.995824   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4efe5340121"
	I0314 11:15:41.008287   13130 logs.go:123] Gathering logs for coredns [a5edbd1e8e3a] ...
	I0314 11:15:41.008298   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5edbd1e8e3a"
	I0314 11:15:41.020665   13130 logs.go:123] Gathering logs for kube-scheduler [12305d6bc0e3] ...
	I0314 11:15:41.020676   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12305d6bc0e3"
	I0314 11:15:41.036912   13130 logs.go:123] Gathering logs for kube-proxy [b218570cde77] ...
	I0314 11:15:41.036923   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b218570cde77"
	I0314 11:15:43.551317   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:15:48.552139   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:15:48.552249   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:15:48.564152   13130 logs.go:276] 1 containers: [f764ecfece48]
	I0314 11:15:48.564226   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:15:48.579370   13130 logs.go:276] 1 containers: [41bacb9f9ff3]
	I0314 11:15:48.579442   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:15:48.590515   13130 logs.go:276] 2 containers: [e4efe5340121 a5edbd1e8e3a]
	I0314 11:15:48.590584   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:15:48.601549   13130 logs.go:276] 1 containers: [12305d6bc0e3]
	I0314 11:15:48.601622   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:15:48.612231   13130 logs.go:276] 1 containers: [b218570cde77]
	I0314 11:15:48.612306   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:15:48.623388   13130 logs.go:276] 1 containers: [c539cb8e52c8]
	I0314 11:15:48.623461   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:15:48.641732   13130 logs.go:276] 0 containers: []
	W0314 11:15:48.641747   13130 logs.go:278] No container was found matching "kindnet"
	I0314 11:15:48.641804   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:15:48.655945   13130 logs.go:276] 1 containers: [118988a93a39]
	I0314 11:15:48.655961   13130 logs.go:123] Gathering logs for kubelet ...
	I0314 11:15:48.655966   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:15:48.690375   13130 logs.go:123] Gathering logs for kube-apiserver [f764ecfece48] ...
	I0314 11:15:48.690387   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f764ecfece48"
	I0314 11:15:48.714180   13130 logs.go:123] Gathering logs for coredns [e4efe5340121] ...
	I0314 11:15:48.714192   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4efe5340121"
	I0314 11:15:48.726524   13130 logs.go:123] Gathering logs for coredns [a5edbd1e8e3a] ...
	I0314 11:15:48.726535   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5edbd1e8e3a"
	I0314 11:15:48.738466   13130 logs.go:123] Gathering logs for kube-scheduler [12305d6bc0e3] ...
	I0314 11:15:48.738476   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12305d6bc0e3"
	I0314 11:15:48.754681   13130 logs.go:123] Gathering logs for kube-proxy [b218570cde77] ...
	I0314 11:15:48.754693   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b218570cde77"
	I0314 11:15:48.775427   13130 logs.go:123] Gathering logs for kube-controller-manager [c539cb8e52c8] ...
	I0314 11:15:48.775439   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c539cb8e52c8"
	I0314 11:15:48.793066   13130 logs.go:123] Gathering logs for Docker ...
	I0314 11:15:48.793079   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:15:48.816746   13130 logs.go:123] Gathering logs for dmesg ...
	I0314 11:15:48.816755   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:15:48.821410   13130 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:15:48.821418   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:15:48.862378   13130 logs.go:123] Gathering logs for etcd [41bacb9f9ff3] ...
	I0314 11:15:48.862389   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41bacb9f9ff3"
	I0314 11:15:48.877018   13130 logs.go:123] Gathering logs for storage-provisioner [118988a93a39] ...
	I0314 11:15:48.877030   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 118988a93a39"
	I0314 11:15:48.889717   13130 logs.go:123] Gathering logs for container status ...
	I0314 11:15:48.889728   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:15:51.403419   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:15:56.405612   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:15:56.405777   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:15:56.425467   13130 logs.go:276] 1 containers: [f764ecfece48]
	I0314 11:15:56.425556   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:15:56.438772   13130 logs.go:276] 1 containers: [41bacb9f9ff3]
	I0314 11:15:56.438845   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:15:56.449957   13130 logs.go:276] 2 containers: [e4efe5340121 a5edbd1e8e3a]
	I0314 11:15:56.450030   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:15:56.460053   13130 logs.go:276] 1 containers: [12305d6bc0e3]
	I0314 11:15:56.460126   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:15:56.470804   13130 logs.go:276] 1 containers: [b218570cde77]
	I0314 11:15:56.470880   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:15:56.481373   13130 logs.go:276] 1 containers: [c539cb8e52c8]
	I0314 11:15:56.481438   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:15:56.498906   13130 logs.go:276] 0 containers: []
	W0314 11:15:56.498918   13130 logs.go:278] No container was found matching "kindnet"
	I0314 11:15:56.498976   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:15:56.509867   13130 logs.go:276] 1 containers: [118988a93a39]
	I0314 11:15:56.509882   13130 logs.go:123] Gathering logs for kubelet ...
	I0314 11:15:56.509887   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:15:56.545474   13130 logs.go:123] Gathering logs for dmesg ...
	I0314 11:15:56.545483   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:15:56.550255   13130 logs.go:123] Gathering logs for etcd [41bacb9f9ff3] ...
	I0314 11:15:56.550263   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41bacb9f9ff3"
	I0314 11:15:56.563882   13130 logs.go:123] Gathering logs for coredns [e4efe5340121] ...
	I0314 11:15:56.563893   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4efe5340121"
	I0314 11:15:56.575676   13130 logs.go:123] Gathering logs for kube-scheduler [12305d6bc0e3] ...
	I0314 11:15:56.575690   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12305d6bc0e3"
	I0314 11:15:56.591513   13130 logs.go:123] Gathering logs for kube-controller-manager [c539cb8e52c8] ...
	I0314 11:15:56.591523   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c539cb8e52c8"
	I0314 11:15:56.608979   13130 logs.go:123] Gathering logs for Docker ...
	I0314 11:15:56.608990   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:15:56.634323   13130 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:15:56.634332   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:15:56.669019   13130 logs.go:123] Gathering logs for kube-apiserver [f764ecfece48] ...
	I0314 11:15:56.669033   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f764ecfece48"
	I0314 11:15:56.684495   13130 logs.go:123] Gathering logs for coredns [a5edbd1e8e3a] ...
	I0314 11:15:56.684505   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5edbd1e8e3a"
	I0314 11:15:56.696961   13130 logs.go:123] Gathering logs for kube-proxy [b218570cde77] ...
	I0314 11:15:56.696970   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b218570cde77"
	I0314 11:15:56.708860   13130 logs.go:123] Gathering logs for storage-provisioner [118988a93a39] ...
	I0314 11:15:56.708871   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 118988a93a39"
	I0314 11:15:56.721647   13130 logs.go:123] Gathering logs for container status ...
	I0314 11:15:56.721659   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:15:59.235599   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:16:04.237784   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:16:04.238024   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:16:04.263547   13130 logs.go:276] 1 containers: [f764ecfece48]
	I0314 11:16:04.263663   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:16:04.279892   13130 logs.go:276] 1 containers: [41bacb9f9ff3]
	I0314 11:16:04.279975   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:16:04.292933   13130 logs.go:276] 2 containers: [e4efe5340121 a5edbd1e8e3a]
	I0314 11:16:04.293007   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:16:04.304652   13130 logs.go:276] 1 containers: [12305d6bc0e3]
	I0314 11:16:04.304728   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:16:04.315989   13130 logs.go:276] 1 containers: [b218570cde77]
	I0314 11:16:04.316068   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:16:04.326445   13130 logs.go:276] 1 containers: [c539cb8e52c8]
	I0314 11:16:04.326509   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:16:04.337487   13130 logs.go:276] 0 containers: []
	W0314 11:16:04.337498   13130 logs.go:278] No container was found matching "kindnet"
	I0314 11:16:04.337563   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:16:04.347853   13130 logs.go:276] 1 containers: [118988a93a39]
	I0314 11:16:04.347869   13130 logs.go:123] Gathering logs for coredns [e4efe5340121] ...
	I0314 11:16:04.347874   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4efe5340121"
	I0314 11:16:04.359646   13130 logs.go:123] Gathering logs for kube-proxy [b218570cde77] ...
	I0314 11:16:04.359661   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b218570cde77"
	I0314 11:16:04.371581   13130 logs.go:123] Gathering logs for kube-controller-manager [c539cb8e52c8] ...
	I0314 11:16:04.371590   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c539cb8e52c8"
	I0314 11:16:04.398915   13130 logs.go:123] Gathering logs for Docker ...
	I0314 11:16:04.398925   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:16:04.423753   13130 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:16:04.423762   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:16:04.458604   13130 logs.go:123] Gathering logs for etcd [41bacb9f9ff3] ...
	I0314 11:16:04.458615   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41bacb9f9ff3"
	I0314 11:16:04.472978   13130 logs.go:123] Gathering logs for kube-apiserver [f764ecfece48] ...
	I0314 11:16:04.472990   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f764ecfece48"
	I0314 11:16:04.488141   13130 logs.go:123] Gathering logs for coredns [a5edbd1e8e3a] ...
	I0314 11:16:04.488153   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5edbd1e8e3a"
	I0314 11:16:04.499634   13130 logs.go:123] Gathering logs for kube-scheduler [12305d6bc0e3] ...
	I0314 11:16:04.499647   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12305d6bc0e3"
	I0314 11:16:04.514914   13130 logs.go:123] Gathering logs for storage-provisioner [118988a93a39] ...
	I0314 11:16:04.514926   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 118988a93a39"
	I0314 11:16:04.526366   13130 logs.go:123] Gathering logs for container status ...
	I0314 11:16:04.526378   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:16:04.538336   13130 logs.go:123] Gathering logs for kubelet ...
	I0314 11:16:04.538345   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:16:04.574182   13130 logs.go:123] Gathering logs for dmesg ...
	I0314 11:16:04.574191   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:16:07.170060   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:16:12.172474   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:16:12.173179   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:16:12.197204   13130 logs.go:276] 1 containers: [f764ecfece48]
	I0314 11:16:12.197296   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:16:12.212657   13130 logs.go:276] 1 containers: [41bacb9f9ff3]
	I0314 11:16:12.212731   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:16:12.224769   13130 logs.go:276] 4 containers: [6055464981da 26e593605a81 e4efe5340121 a5edbd1e8e3a]
	I0314 11:16:12.224842   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:16:12.235077   13130 logs.go:276] 1 containers: [12305d6bc0e3]
	I0314 11:16:12.235149   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:16:12.245803   13130 logs.go:276] 1 containers: [b218570cde77]
	I0314 11:16:12.245866   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:16:12.256072   13130 logs.go:276] 1 containers: [c539cb8e52c8]
	I0314 11:16:12.256144   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:16:12.270791   13130 logs.go:276] 0 containers: []
	W0314 11:16:12.270805   13130 logs.go:278] No container was found matching "kindnet"
	I0314 11:16:12.270862   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:16:12.281552   13130 logs.go:276] 1 containers: [118988a93a39]
	I0314 11:16:12.281570   13130 logs.go:123] Gathering logs for kube-proxy [b218570cde77] ...
	I0314 11:16:12.281577   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b218570cde77"
	I0314 11:16:12.293908   13130 logs.go:123] Gathering logs for container status ...
	I0314 11:16:12.293921   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:16:12.309017   13130 logs.go:123] Gathering logs for kubelet ...
	I0314 11:16:12.309031   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:16:12.345260   13130 logs.go:123] Gathering logs for kube-apiserver [f764ecfece48] ...
	I0314 11:16:12.345268   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f764ecfece48"
	I0314 11:16:12.361057   13130 logs.go:123] Gathering logs for coredns [26e593605a81] ...
	I0314 11:16:12.361067   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26e593605a81"
	I0314 11:16:12.379958   13130 logs.go:123] Gathering logs for coredns [6055464981da] ...
	I0314 11:16:12.379969   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6055464981da"
	I0314 11:16:12.391660   13130 logs.go:123] Gathering logs for coredns [a5edbd1e8e3a] ...
	I0314 11:16:12.391674   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5edbd1e8e3a"
	I0314 11:16:12.405448   13130 logs.go:123] Gathering logs for Docker ...
	I0314 11:16:12.405459   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:16:12.430509   13130 logs.go:123] Gathering logs for etcd [41bacb9f9ff3] ...
	I0314 11:16:12.430517   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41bacb9f9ff3"
	I0314 11:16:12.448981   13130 logs.go:123] Gathering logs for storage-provisioner [118988a93a39] ...
	I0314 11:16:12.448994   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 118988a93a39"
	I0314 11:16:12.460436   13130 logs.go:123] Gathering logs for kube-scheduler [12305d6bc0e3] ...
	I0314 11:16:12.460446   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12305d6bc0e3"
	I0314 11:16:12.477388   13130 logs.go:123] Gathering logs for kube-controller-manager [c539cb8e52c8] ...
	I0314 11:16:12.477399   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c539cb8e52c8"
	I0314 11:16:12.495270   13130 logs.go:123] Gathering logs for dmesg ...
	I0314 11:16:12.495280   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:16:12.500018   13130 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:16:12.500027   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:16:12.541505   13130 logs.go:123] Gathering logs for coredns [e4efe5340121] ...
	I0314 11:16:12.541517   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4efe5340121"
	I0314 11:16:15.059651   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:16:20.062128   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:16:20.062497   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:16:20.099176   13130 logs.go:276] 1 containers: [f764ecfece48]
	I0314 11:16:20.099324   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:16:20.119498   13130 logs.go:276] 1 containers: [41bacb9f9ff3]
	I0314 11:16:20.119599   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:16:20.135525   13130 logs.go:276] 4 containers: [6055464981da 26e593605a81 e4efe5340121 a5edbd1e8e3a]
	I0314 11:16:20.135608   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:16:20.147699   13130 logs.go:276] 1 containers: [12305d6bc0e3]
	I0314 11:16:20.147765   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:16:20.158469   13130 logs.go:276] 1 containers: [b218570cde77]
	I0314 11:16:20.158540   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:16:20.169253   13130 logs.go:276] 1 containers: [c539cb8e52c8]
	I0314 11:16:20.169314   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:16:20.179374   13130 logs.go:276] 0 containers: []
	W0314 11:16:20.179388   13130 logs.go:278] No container was found matching "kindnet"
	I0314 11:16:20.179451   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:16:20.189981   13130 logs.go:276] 1 containers: [118988a93a39]
	I0314 11:16:20.189998   13130 logs.go:123] Gathering logs for dmesg ...
	I0314 11:16:20.190007   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:16:20.194959   13130 logs.go:123] Gathering logs for coredns [6055464981da] ...
	I0314 11:16:20.194964   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6055464981da"
	I0314 11:16:20.207723   13130 logs.go:123] Gathering logs for storage-provisioner [118988a93a39] ...
	I0314 11:16:20.207736   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 118988a93a39"
	I0314 11:16:20.221563   13130 logs.go:123] Gathering logs for container status ...
	I0314 11:16:20.221574   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:16:20.234139   13130 logs.go:123] Gathering logs for kube-apiserver [f764ecfece48] ...
	I0314 11:16:20.234148   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f764ecfece48"
	I0314 11:16:20.250200   13130 logs.go:123] Gathering logs for coredns [a5edbd1e8e3a] ...
	I0314 11:16:20.250210   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5edbd1e8e3a"
	I0314 11:16:20.261875   13130 logs.go:123] Gathering logs for kube-scheduler [12305d6bc0e3] ...
	I0314 11:16:20.261889   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12305d6bc0e3"
	I0314 11:16:20.276985   13130 logs.go:123] Gathering logs for kube-controller-manager [c539cb8e52c8] ...
	I0314 11:16:20.276997   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c539cb8e52c8"
	I0314 11:16:20.294938   13130 logs.go:123] Gathering logs for kubelet ...
	I0314 11:16:20.294948   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:16:20.331323   13130 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:16:20.331335   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:16:20.365896   13130 logs.go:123] Gathering logs for etcd [41bacb9f9ff3] ...
	I0314 11:16:20.365907   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41bacb9f9ff3"
	I0314 11:16:20.386967   13130 logs.go:123] Gathering logs for coredns [e4efe5340121] ...
	I0314 11:16:20.386983   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4efe5340121"
	I0314 11:16:20.400305   13130 logs.go:123] Gathering logs for kube-proxy [b218570cde77] ...
	I0314 11:16:20.400315   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b218570cde77"
	I0314 11:16:20.412530   13130 logs.go:123] Gathering logs for Docker ...
	I0314 11:16:20.412540   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:16:20.437468   13130 logs.go:123] Gathering logs for coredns [26e593605a81] ...
	I0314 11:16:20.437475   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26e593605a81"
	I0314 11:16:22.950205   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:16:27.953018   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:16:27.953259   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:16:27.978185   13130 logs.go:276] 1 containers: [f764ecfece48]
	I0314 11:16:27.978287   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:16:27.994532   13130 logs.go:276] 1 containers: [41bacb9f9ff3]
	I0314 11:16:27.994618   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:16:28.007962   13130 logs.go:276] 4 containers: [6055464981da 26e593605a81 e4efe5340121 a5edbd1e8e3a]
	I0314 11:16:28.008039   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:16:28.019014   13130 logs.go:276] 1 containers: [12305d6bc0e3]
	I0314 11:16:28.019082   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:16:28.029572   13130 logs.go:276] 1 containers: [b218570cde77]
	I0314 11:16:28.029644   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:16:28.039998   13130 logs.go:276] 1 containers: [c539cb8e52c8]
	I0314 11:16:28.040068   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:16:28.049764   13130 logs.go:276] 0 containers: []
	W0314 11:16:28.049777   13130 logs.go:278] No container was found matching "kindnet"
	I0314 11:16:28.049832   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:16:28.060705   13130 logs.go:276] 1 containers: [118988a93a39]
	I0314 11:16:28.060725   13130 logs.go:123] Gathering logs for dmesg ...
	I0314 11:16:28.060731   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:16:28.065257   13130 logs.go:123] Gathering logs for coredns [26e593605a81] ...
	I0314 11:16:28.065268   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26e593605a81"
	I0314 11:16:28.076994   13130 logs.go:123] Gathering logs for coredns [e4efe5340121] ...
	I0314 11:16:28.077003   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4efe5340121"
	I0314 11:16:28.089057   13130 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:16:28.089070   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:16:28.127848   13130 logs.go:123] Gathering logs for etcd [41bacb9f9ff3] ...
	I0314 11:16:28.127860   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41bacb9f9ff3"
	I0314 11:16:28.142872   13130 logs.go:123] Gathering logs for kube-proxy [b218570cde77] ...
	I0314 11:16:28.142883   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b218570cde77"
	I0314 11:16:28.156912   13130 logs.go:123] Gathering logs for storage-provisioner [118988a93a39] ...
	I0314 11:16:28.156923   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 118988a93a39"
	I0314 11:16:28.167944   13130 logs.go:123] Gathering logs for kube-controller-manager [c539cb8e52c8] ...
	I0314 11:16:28.167955   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c539cb8e52c8"
	I0314 11:16:28.185216   13130 logs.go:123] Gathering logs for Docker ...
	I0314 11:16:28.185227   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:16:28.209022   13130 logs.go:123] Gathering logs for kubelet ...
	I0314 11:16:28.209032   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:16:28.242570   13130 logs.go:123] Gathering logs for kube-apiserver [f764ecfece48] ...
	I0314 11:16:28.242578   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f764ecfece48"
	I0314 11:16:28.260399   13130 logs.go:123] Gathering logs for coredns [6055464981da] ...
	I0314 11:16:28.260409   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6055464981da"
	I0314 11:16:28.271605   13130 logs.go:123] Gathering logs for coredns [a5edbd1e8e3a] ...
	I0314 11:16:28.271619   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5edbd1e8e3a"
	I0314 11:16:28.283071   13130 logs.go:123] Gathering logs for kube-scheduler [12305d6bc0e3] ...
	I0314 11:16:28.283081   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12305d6bc0e3"
	I0314 11:16:28.298080   13130 logs.go:123] Gathering logs for container status ...
	I0314 11:16:28.298093   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:16:30.816953   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:16:35.819282   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:16:35.819443   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:16:35.831112   13130 logs.go:276] 1 containers: [f764ecfece48]
	I0314 11:16:35.831184   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:16:35.842073   13130 logs.go:276] 1 containers: [41bacb9f9ff3]
	I0314 11:16:35.842138   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:16:35.853711   13130 logs.go:276] 4 containers: [6055464981da 26e593605a81 e4efe5340121 a5edbd1e8e3a]
	I0314 11:16:35.853801   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:16:35.865951   13130 logs.go:276] 1 containers: [12305d6bc0e3]
	I0314 11:16:35.866021   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:16:35.876929   13130 logs.go:276] 1 containers: [b218570cde77]
	I0314 11:16:35.877002   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:16:35.887575   13130 logs.go:276] 1 containers: [c539cb8e52c8]
	I0314 11:16:35.887653   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:16:35.898042   13130 logs.go:276] 0 containers: []
	W0314 11:16:35.898057   13130 logs.go:278] No container was found matching "kindnet"
	I0314 11:16:35.898122   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:16:35.909371   13130 logs.go:276] 1 containers: [118988a93a39]
	I0314 11:16:35.909393   13130 logs.go:123] Gathering logs for container status ...
	I0314 11:16:35.909399   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:16:35.921184   13130 logs.go:123] Gathering logs for coredns [6055464981da] ...
	I0314 11:16:35.921195   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6055464981da"
	I0314 11:16:35.933355   13130 logs.go:123] Gathering logs for kube-controller-manager [c539cb8e52c8] ...
	I0314 11:16:35.933370   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c539cb8e52c8"
	I0314 11:16:35.951364   13130 logs.go:123] Gathering logs for storage-provisioner [118988a93a39] ...
	I0314 11:16:35.951375   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 118988a93a39"
	I0314 11:16:35.966069   13130 logs.go:123] Gathering logs for dmesg ...
	I0314 11:16:35.966079   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:16:35.971092   13130 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:16:35.971099   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:16:36.005472   13130 logs.go:123] Gathering logs for coredns [26e593605a81] ...
	I0314 11:16:36.005482   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26e593605a81"
	I0314 11:16:36.017222   13130 logs.go:123] Gathering logs for coredns [e4efe5340121] ...
	I0314 11:16:36.017231   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4efe5340121"
	I0314 11:16:36.028999   13130 logs.go:123] Gathering logs for kube-scheduler [12305d6bc0e3] ...
	I0314 11:16:36.029013   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12305d6bc0e3"
	I0314 11:16:36.043881   13130 logs.go:123] Gathering logs for kube-proxy [b218570cde77] ...
	I0314 11:16:36.043893   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b218570cde77"
	I0314 11:16:36.055496   13130 logs.go:123] Gathering logs for kubelet ...
	I0314 11:16:36.055509   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:16:36.090869   13130 logs.go:123] Gathering logs for kube-apiserver [f764ecfece48] ...
	I0314 11:16:36.090878   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f764ecfece48"
	I0314 11:16:36.105345   13130 logs.go:123] Gathering logs for etcd [41bacb9f9ff3] ...
	I0314 11:16:36.105355   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41bacb9f9ff3"
	I0314 11:16:36.119447   13130 logs.go:123] Gathering logs for coredns [a5edbd1e8e3a] ...
	I0314 11:16:36.119458   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5edbd1e8e3a"
	I0314 11:16:36.131509   13130 logs.go:123] Gathering logs for Docker ...
	I0314 11:16:36.131520   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:16:38.658927   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:16:43.661365   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:16:43.661561   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:16:43.689261   13130 logs.go:276] 1 containers: [f764ecfece48]
	I0314 11:16:43.689381   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:16:43.703037   13130 logs.go:276] 1 containers: [41bacb9f9ff3]
	I0314 11:16:43.703125   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:16:43.715230   13130 logs.go:276] 4 containers: [6055464981da 26e593605a81 e4efe5340121 a5edbd1e8e3a]
	I0314 11:16:43.715306   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:16:43.726062   13130 logs.go:276] 1 containers: [12305d6bc0e3]
	I0314 11:16:43.726137   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:16:43.736352   13130 logs.go:276] 1 containers: [b218570cde77]
	I0314 11:16:43.736426   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:16:43.746803   13130 logs.go:276] 1 containers: [c539cb8e52c8]
	I0314 11:16:43.746873   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:16:43.757085   13130 logs.go:276] 0 containers: []
	W0314 11:16:43.757098   13130 logs.go:278] No container was found matching "kindnet"
	I0314 11:16:43.757168   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:16:43.768228   13130 logs.go:276] 1 containers: [118988a93a39]
	I0314 11:16:43.768244   13130 logs.go:123] Gathering logs for etcd [41bacb9f9ff3] ...
	I0314 11:16:43.768250   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41bacb9f9ff3"
	I0314 11:16:43.782491   13130 logs.go:123] Gathering logs for coredns [6055464981da] ...
	I0314 11:16:43.782502   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6055464981da"
	I0314 11:16:43.794358   13130 logs.go:123] Gathering logs for kube-proxy [b218570cde77] ...
	I0314 11:16:43.794369   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b218570cde77"
	I0314 11:16:43.806405   13130 logs.go:123] Gathering logs for kube-controller-manager [c539cb8e52c8] ...
	I0314 11:16:43.806418   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c539cb8e52c8"
	I0314 11:16:43.824223   13130 logs.go:123] Gathering logs for container status ...
	I0314 11:16:43.824233   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:16:43.837337   13130 logs.go:123] Gathering logs for dmesg ...
	I0314 11:16:43.837348   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:16:43.842094   13130 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:16:43.842102   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:16:43.879444   13130 logs.go:123] Gathering logs for storage-provisioner [118988a93a39] ...
	I0314 11:16:43.879458   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 118988a93a39"
	I0314 11:16:43.891506   13130 logs.go:123] Gathering logs for coredns [26e593605a81] ...
	I0314 11:16:43.891519   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26e593605a81"
	I0314 11:16:43.903522   13130 logs.go:123] Gathering logs for coredns [e4efe5340121] ...
	I0314 11:16:43.903534   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4efe5340121"
	I0314 11:16:43.915234   13130 logs.go:123] Gathering logs for coredns [a5edbd1e8e3a] ...
	I0314 11:16:43.915245   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5edbd1e8e3a"
	I0314 11:16:43.928923   13130 logs.go:123] Gathering logs for kubelet ...
	I0314 11:16:43.928934   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:16:43.964454   13130 logs.go:123] Gathering logs for kube-apiserver [f764ecfece48] ...
	I0314 11:16:43.964464   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f764ecfece48"
	I0314 11:16:43.978583   13130 logs.go:123] Gathering logs for kube-scheduler [12305d6bc0e3] ...
	I0314 11:16:43.978593   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12305d6bc0e3"
	I0314 11:16:43.994061   13130 logs.go:123] Gathering logs for Docker ...
	I0314 11:16:43.994070   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:16:46.520675   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:16:51.522889   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:16:51.523005   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:16:51.534390   13130 logs.go:276] 1 containers: [f764ecfece48]
	I0314 11:16:51.534462   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:16:51.545630   13130 logs.go:276] 1 containers: [41bacb9f9ff3]
	I0314 11:16:51.545708   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:16:51.557496   13130 logs.go:276] 4 containers: [6055464981da 26e593605a81 e4efe5340121 a5edbd1e8e3a]
	I0314 11:16:51.557581   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:16:51.568998   13130 logs.go:276] 1 containers: [12305d6bc0e3]
	I0314 11:16:51.569078   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:16:51.580712   13130 logs.go:276] 1 containers: [b218570cde77]
	I0314 11:16:51.580807   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:16:51.592717   13130 logs.go:276] 1 containers: [c539cb8e52c8]
	I0314 11:16:51.592785   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:16:51.603919   13130 logs.go:276] 0 containers: []
	W0314 11:16:51.603933   13130 logs.go:278] No container was found matching "kindnet"
	I0314 11:16:51.603996   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:16:51.615457   13130 logs.go:276] 1 containers: [118988a93a39]
	I0314 11:16:51.615475   13130 logs.go:123] Gathering logs for dmesg ...
	I0314 11:16:51.615484   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:16:51.620360   13130 logs.go:123] Gathering logs for coredns [6055464981da] ...
	I0314 11:16:51.620372   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6055464981da"
	I0314 11:16:51.632939   13130 logs.go:123] Gathering logs for Docker ...
	I0314 11:16:51.632954   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:16:51.657963   13130 logs.go:123] Gathering logs for container status ...
	I0314 11:16:51.657978   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:16:51.671455   13130 logs.go:123] Gathering logs for kubelet ...
	I0314 11:16:51.671468   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:16:51.708887   13130 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:16:51.708906   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:16:51.762455   13130 logs.go:123] Gathering logs for kube-apiserver [f764ecfece48] ...
	I0314 11:16:51.762470   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f764ecfece48"
	I0314 11:16:51.777527   13130 logs.go:123] Gathering logs for etcd [41bacb9f9ff3] ...
	I0314 11:16:51.777541   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41bacb9f9ff3"
	I0314 11:16:51.793855   13130 logs.go:123] Gathering logs for coredns [26e593605a81] ...
	I0314 11:16:51.793872   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26e593605a81"
	I0314 11:16:51.806434   13130 logs.go:123] Gathering logs for coredns [e4efe5340121] ...
	I0314 11:16:51.806448   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4efe5340121"
	I0314 11:16:51.821409   13130 logs.go:123] Gathering logs for kube-scheduler [12305d6bc0e3] ...
	I0314 11:16:51.821424   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12305d6bc0e3"
	I0314 11:16:51.838716   13130 logs.go:123] Gathering logs for kube-proxy [b218570cde77] ...
	I0314 11:16:51.838729   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b218570cde77"
	I0314 11:16:51.860558   13130 logs.go:123] Gathering logs for storage-provisioner [118988a93a39] ...
	I0314 11:16:51.860571   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 118988a93a39"
	I0314 11:16:51.872799   13130 logs.go:123] Gathering logs for coredns [a5edbd1e8e3a] ...
	I0314 11:16:51.872813   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5edbd1e8e3a"
	I0314 11:16:51.889039   13130 logs.go:123] Gathering logs for kube-controller-manager [c539cb8e52c8] ...
	I0314 11:16:51.889050   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c539cb8e52c8"
	I0314 11:16:54.409509   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:16:59.411771   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:16:59.411992   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:16:59.437661   13130 logs.go:276] 1 containers: [f764ecfece48]
	I0314 11:16:59.437752   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:16:59.451981   13130 logs.go:276] 1 containers: [41bacb9f9ff3]
	I0314 11:16:59.452052   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:16:59.463283   13130 logs.go:276] 4 containers: [6055464981da 26e593605a81 e4efe5340121 a5edbd1e8e3a]
	I0314 11:16:59.463353   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:16:59.473683   13130 logs.go:276] 1 containers: [12305d6bc0e3]
	I0314 11:16:59.473742   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:16:59.484961   13130 logs.go:276] 1 containers: [b218570cde77]
	I0314 11:16:59.485033   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:16:59.495590   13130 logs.go:276] 1 containers: [c539cb8e52c8]
	I0314 11:16:59.495652   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:16:59.506057   13130 logs.go:276] 0 containers: []
	W0314 11:16:59.506070   13130 logs.go:278] No container was found matching "kindnet"
	I0314 11:16:59.506126   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:16:59.516763   13130 logs.go:276] 1 containers: [118988a93a39]
	I0314 11:16:59.516780   13130 logs.go:123] Gathering logs for kube-apiserver [f764ecfece48] ...
	I0314 11:16:59.516788   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f764ecfece48"
	I0314 11:16:59.531001   13130 logs.go:123] Gathering logs for coredns [26e593605a81] ...
	I0314 11:16:59.531011   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26e593605a81"
	I0314 11:16:59.543306   13130 logs.go:123] Gathering logs for kube-proxy [b218570cde77] ...
	I0314 11:16:59.543318   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b218570cde77"
	I0314 11:16:59.554923   13130 logs.go:123] Gathering logs for storage-provisioner [118988a93a39] ...
	I0314 11:16:59.554933   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 118988a93a39"
	I0314 11:16:59.566884   13130 logs.go:123] Gathering logs for dmesg ...
	I0314 11:16:59.566894   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:16:59.571933   13130 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:16:59.571940   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:16:59.606617   13130 logs.go:123] Gathering logs for etcd [41bacb9f9ff3] ...
	I0314 11:16:59.606629   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41bacb9f9ff3"
	I0314 11:16:59.621250   13130 logs.go:123] Gathering logs for kube-scheduler [12305d6bc0e3] ...
	I0314 11:16:59.621263   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12305d6bc0e3"
	I0314 11:16:59.636533   13130 logs.go:123] Gathering logs for Docker ...
	I0314 11:16:59.636543   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:16:59.661154   13130 logs.go:123] Gathering logs for coredns [6055464981da] ...
	I0314 11:16:59.661162   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6055464981da"
	I0314 11:16:59.673149   13130 logs.go:123] Gathering logs for coredns [a5edbd1e8e3a] ...
	I0314 11:16:59.673160   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5edbd1e8e3a"
	I0314 11:16:59.684672   13130 logs.go:123] Gathering logs for kubelet ...
	I0314 11:16:59.684685   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:16:59.720445   13130 logs.go:123] Gathering logs for coredns [e4efe5340121] ...
	I0314 11:16:59.720453   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4efe5340121"
	I0314 11:16:59.732837   13130 logs.go:123] Gathering logs for kube-controller-manager [c539cb8e52c8] ...
	I0314 11:16:59.732850   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c539cb8e52c8"
	I0314 11:16:59.750241   13130 logs.go:123] Gathering logs for container status ...
	I0314 11:16:59.750253   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:17:02.264269   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:17:07.266524   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:17:07.266650   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:17:07.278740   13130 logs.go:276] 1 containers: [f764ecfece48]
	I0314 11:17:07.278813   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:17:07.289707   13130 logs.go:276] 1 containers: [41bacb9f9ff3]
	I0314 11:17:07.289772   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:17:07.300561   13130 logs.go:276] 4 containers: [6055464981da 26e593605a81 e4efe5340121 a5edbd1e8e3a]
	I0314 11:17:07.300640   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:17:07.311996   13130 logs.go:276] 1 containers: [12305d6bc0e3]
	I0314 11:17:07.312062   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:17:07.322766   13130 logs.go:276] 1 containers: [b218570cde77]
	I0314 11:17:07.322834   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:17:07.333465   13130 logs.go:276] 1 containers: [c539cb8e52c8]
	I0314 11:17:07.333540   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:17:07.343334   13130 logs.go:276] 0 containers: []
	W0314 11:17:07.343350   13130 logs.go:278] No container was found matching "kindnet"
	I0314 11:17:07.343418   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:17:07.354710   13130 logs.go:276] 1 containers: [118988a93a39]
	I0314 11:17:07.354729   13130 logs.go:123] Gathering logs for coredns [e4efe5340121] ...
	I0314 11:17:07.354734   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4efe5340121"
	I0314 11:17:07.366752   13130 logs.go:123] Gathering logs for kube-proxy [b218570cde77] ...
	I0314 11:17:07.366763   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b218570cde77"
	I0314 11:17:07.378243   13130 logs.go:123] Gathering logs for kube-controller-manager [c539cb8e52c8] ...
	I0314 11:17:07.378253   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c539cb8e52c8"
	I0314 11:17:07.397873   13130 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:17:07.397884   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:17:07.435677   13130 logs.go:123] Gathering logs for coredns [a5edbd1e8e3a] ...
	I0314 11:17:07.435688   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5edbd1e8e3a"
	I0314 11:17:07.449118   13130 logs.go:123] Gathering logs for kube-scheduler [12305d6bc0e3] ...
	I0314 11:17:07.449131   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12305d6bc0e3"
	I0314 11:17:07.464053   13130 logs.go:123] Gathering logs for Docker ...
	I0314 11:17:07.464066   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:17:07.489739   13130 logs.go:123] Gathering logs for kube-apiserver [f764ecfece48] ...
	I0314 11:17:07.489747   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f764ecfece48"
	I0314 11:17:07.506040   13130 logs.go:123] Gathering logs for kubelet ...
	I0314 11:17:07.506052   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:17:07.542517   13130 logs.go:123] Gathering logs for dmesg ...
	I0314 11:17:07.542528   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:17:07.547329   13130 logs.go:123] Gathering logs for etcd [41bacb9f9ff3] ...
	I0314 11:17:07.547337   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41bacb9f9ff3"
	I0314 11:17:07.561971   13130 logs.go:123] Gathering logs for coredns [6055464981da] ...
	I0314 11:17:07.561985   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6055464981da"
	I0314 11:17:07.573519   13130 logs.go:123] Gathering logs for coredns [26e593605a81] ...
	I0314 11:17:07.573534   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26e593605a81"
	I0314 11:17:07.594215   13130 logs.go:123] Gathering logs for storage-provisioner [118988a93a39] ...
	I0314 11:17:07.594229   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 118988a93a39"
	I0314 11:17:07.606485   13130 logs.go:123] Gathering logs for container status ...
	I0314 11:17:07.606499   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:17:10.120315   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:17:15.122624   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:17:15.122868   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:17:15.146422   13130 logs.go:276] 1 containers: [f764ecfece48]
	I0314 11:17:15.146544   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:17:15.163272   13130 logs.go:276] 1 containers: [41bacb9f9ff3]
	I0314 11:17:15.163358   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:17:15.176549   13130 logs.go:276] 4 containers: [6055464981da 26e593605a81 e4efe5340121 a5edbd1e8e3a]
	I0314 11:17:15.176625   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:17:15.189216   13130 logs.go:276] 1 containers: [12305d6bc0e3]
	I0314 11:17:15.189286   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:17:15.200315   13130 logs.go:276] 1 containers: [b218570cde77]
	I0314 11:17:15.200384   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:17:15.212238   13130 logs.go:276] 1 containers: [c539cb8e52c8]
	I0314 11:17:15.212311   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:17:15.222358   13130 logs.go:276] 0 containers: []
	W0314 11:17:15.222369   13130 logs.go:278] No container was found matching "kindnet"
	I0314 11:17:15.222428   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:17:15.232803   13130 logs.go:276] 1 containers: [118988a93a39]
	I0314 11:17:15.232824   13130 logs.go:123] Gathering logs for coredns [6055464981da] ...
	I0314 11:17:15.232830   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6055464981da"
	I0314 11:17:15.245043   13130 logs.go:123] Gathering logs for coredns [e4efe5340121] ...
	I0314 11:17:15.245053   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4efe5340121"
	I0314 11:17:15.256955   13130 logs.go:123] Gathering logs for dmesg ...
	I0314 11:17:15.256964   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:17:15.261301   13130 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:17:15.261310   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:17:15.295448   13130 logs.go:123] Gathering logs for kube-controller-manager [c539cb8e52c8] ...
	I0314 11:17:15.295459   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c539cb8e52c8"
	I0314 11:17:15.312786   13130 logs.go:123] Gathering logs for container status ...
	I0314 11:17:15.312796   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:17:15.324904   13130 logs.go:123] Gathering logs for kubelet ...
	I0314 11:17:15.324916   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:17:15.360700   13130 logs.go:123] Gathering logs for kube-apiserver [f764ecfece48] ...
	I0314 11:17:15.360712   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f764ecfece48"
	I0314 11:17:15.375500   13130 logs.go:123] Gathering logs for coredns [a5edbd1e8e3a] ...
	I0314 11:17:15.375510   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5edbd1e8e3a"
	I0314 11:17:15.391065   13130 logs.go:123] Gathering logs for kube-scheduler [12305d6bc0e3] ...
	I0314 11:17:15.391079   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12305d6bc0e3"
	I0314 11:17:15.407119   13130 logs.go:123] Gathering logs for etcd [41bacb9f9ff3] ...
	I0314 11:17:15.407130   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41bacb9f9ff3"
	I0314 11:17:15.422242   13130 logs.go:123] Gathering logs for coredns [26e593605a81] ...
	I0314 11:17:15.422255   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26e593605a81"
	I0314 11:17:15.437261   13130 logs.go:123] Gathering logs for kube-proxy [b218570cde77] ...
	I0314 11:17:15.437272   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b218570cde77"
	I0314 11:17:15.457334   13130 logs.go:123] Gathering logs for storage-provisioner [118988a93a39] ...
	I0314 11:17:15.457345   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 118988a93a39"
	I0314 11:17:15.468908   13130 logs.go:123] Gathering logs for Docker ...
	I0314 11:17:15.468922   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:17:17.994113   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:17:22.994765   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:17:22.994875   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:17:23.007045   13130 logs.go:276] 1 containers: [f764ecfece48]
	I0314 11:17:23.007117   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:17:23.022284   13130 logs.go:276] 1 containers: [41bacb9f9ff3]
	I0314 11:17:23.022351   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:17:23.038116   13130 logs.go:276] 4 containers: [6055464981da 26e593605a81 e4efe5340121 a5edbd1e8e3a]
	I0314 11:17:23.038188   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:17:23.071305   13130 logs.go:276] 1 containers: [12305d6bc0e3]
	I0314 11:17:23.071373   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:17:23.088729   13130 logs.go:276] 1 containers: [b218570cde77]
	I0314 11:17:23.088802   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:17:23.099722   13130 logs.go:276] 1 containers: [c539cb8e52c8]
	I0314 11:17:23.099798   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:17:23.110179   13130 logs.go:276] 0 containers: []
	W0314 11:17:23.110191   13130 logs.go:278] No container was found matching "kindnet"
	I0314 11:17:23.110250   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:17:23.122400   13130 logs.go:276] 1 containers: [118988a93a39]
	I0314 11:17:23.122417   13130 logs.go:123] Gathering logs for kubelet ...
	I0314 11:17:23.122422   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:17:23.159439   13130 logs.go:123] Gathering logs for etcd [41bacb9f9ff3] ...
	I0314 11:17:23.159448   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41bacb9f9ff3"
	I0314 11:17:23.180229   13130 logs.go:123] Gathering logs for Docker ...
	I0314 11:17:23.180241   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:17:23.204469   13130 logs.go:123] Gathering logs for dmesg ...
	I0314 11:17:23.204481   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:17:23.208795   13130 logs.go:123] Gathering logs for kube-apiserver [f764ecfece48] ...
	I0314 11:17:23.208801   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f764ecfece48"
	I0314 11:17:23.227571   13130 logs.go:123] Gathering logs for coredns [a5edbd1e8e3a] ...
	I0314 11:17:23.227583   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5edbd1e8e3a"
	I0314 11:17:23.239333   13130 logs.go:123] Gathering logs for container status ...
	I0314 11:17:23.239344   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:17:23.251463   13130 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:17:23.251474   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:17:23.286608   13130 logs.go:123] Gathering logs for coredns [6055464981da] ...
	I0314 11:17:23.286617   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6055464981da"
	I0314 11:17:23.298104   13130 logs.go:123] Gathering logs for coredns [26e593605a81] ...
	I0314 11:17:23.298115   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26e593605a81"
	I0314 11:17:23.314801   13130 logs.go:123] Gathering logs for coredns [e4efe5340121] ...
	I0314 11:17:23.314810   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4efe5340121"
	I0314 11:17:23.327845   13130 logs.go:123] Gathering logs for kube-controller-manager [c539cb8e52c8] ...
	I0314 11:17:23.327856   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c539cb8e52c8"
	I0314 11:17:23.345298   13130 logs.go:123] Gathering logs for storage-provisioner [118988a93a39] ...
	I0314 11:17:23.345308   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 118988a93a39"
	I0314 11:17:23.357963   13130 logs.go:123] Gathering logs for kube-scheduler [12305d6bc0e3] ...
	I0314 11:17:23.357975   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12305d6bc0e3"
	I0314 11:17:23.384768   13130 logs.go:123] Gathering logs for kube-proxy [b218570cde77] ...
	I0314 11:17:23.384778   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b218570cde77"
	I0314 11:17:25.898861   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:17:30.901001   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:17:30.901167   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:17:30.917074   13130 logs.go:276] 1 containers: [f764ecfece48]
	I0314 11:17:30.917154   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:17:30.929630   13130 logs.go:276] 1 containers: [41bacb9f9ff3]
	I0314 11:17:30.929701   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:17:30.944906   13130 logs.go:276] 4 containers: [6055464981da 26e593605a81 e4efe5340121 a5edbd1e8e3a]
	I0314 11:17:30.944973   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:17:30.960830   13130 logs.go:276] 1 containers: [12305d6bc0e3]
	I0314 11:17:30.960901   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:17:30.971952   13130 logs.go:276] 1 containers: [b218570cde77]
	I0314 11:17:30.972020   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:17:30.982800   13130 logs.go:276] 1 containers: [c539cb8e52c8]
	I0314 11:17:30.982868   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:17:30.992345   13130 logs.go:276] 0 containers: []
	W0314 11:17:30.992355   13130 logs.go:278] No container was found matching "kindnet"
	I0314 11:17:30.992414   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:17:31.002959   13130 logs.go:276] 1 containers: [118988a93a39]
	I0314 11:17:31.002980   13130 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:17:31.002986   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:17:31.037420   13130 logs.go:123] Gathering logs for coredns [a5edbd1e8e3a] ...
	I0314 11:17:31.037432   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5edbd1e8e3a"
	I0314 11:17:31.049075   13130 logs.go:123] Gathering logs for storage-provisioner [118988a93a39] ...
	I0314 11:17:31.049087   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 118988a93a39"
	I0314 11:17:31.060177   13130 logs.go:123] Gathering logs for kubelet ...
	I0314 11:17:31.060189   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:17:31.093635   13130 logs.go:123] Gathering logs for etcd [41bacb9f9ff3] ...
	I0314 11:17:31.093643   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41bacb9f9ff3"
	I0314 11:17:31.111283   13130 logs.go:123] Gathering logs for coredns [6055464981da] ...
	I0314 11:17:31.111297   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6055464981da"
	I0314 11:17:31.123366   13130 logs.go:123] Gathering logs for coredns [e4efe5340121] ...
	I0314 11:17:31.123377   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4efe5340121"
	I0314 11:17:31.135371   13130 logs.go:123] Gathering logs for kube-proxy [b218570cde77] ...
	I0314 11:17:31.135383   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b218570cde77"
	I0314 11:17:31.147593   13130 logs.go:123] Gathering logs for container status ...
	I0314 11:17:31.147603   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:17:31.159711   13130 logs.go:123] Gathering logs for dmesg ...
	I0314 11:17:31.159723   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:17:31.164297   13130 logs.go:123] Gathering logs for kube-apiserver [f764ecfece48] ...
	I0314 11:17:31.164304   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f764ecfece48"
	I0314 11:17:31.178454   13130 logs.go:123] Gathering logs for coredns [26e593605a81] ...
	I0314 11:17:31.178464   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26e593605a81"
	I0314 11:17:31.189785   13130 logs.go:123] Gathering logs for kube-scheduler [12305d6bc0e3] ...
	I0314 11:17:31.189793   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12305d6bc0e3"
	I0314 11:17:31.208487   13130 logs.go:123] Gathering logs for kube-controller-manager [c539cb8e52c8] ...
	I0314 11:17:31.208498   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c539cb8e52c8"
	I0314 11:17:31.226876   13130 logs.go:123] Gathering logs for Docker ...
	I0314 11:17:31.226886   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:17:33.753588   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:17:38.755794   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:17:38.755917   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:17:38.767074   13130 logs.go:276] 1 containers: [f764ecfece48]
	I0314 11:17:38.767157   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:17:38.778526   13130 logs.go:276] 1 containers: [41bacb9f9ff3]
	I0314 11:17:38.778596   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:17:38.789466   13130 logs.go:276] 4 containers: [6055464981da 26e593605a81 e4efe5340121 a5edbd1e8e3a]
	I0314 11:17:38.789538   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:17:38.799716   13130 logs.go:276] 1 containers: [12305d6bc0e3]
	I0314 11:17:38.799790   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:17:38.810519   13130 logs.go:276] 1 containers: [b218570cde77]
	I0314 11:17:38.810592   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:17:38.820717   13130 logs.go:276] 1 containers: [c539cb8e52c8]
	I0314 11:17:38.820786   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:17:38.831357   13130 logs.go:276] 0 containers: []
	W0314 11:17:38.831369   13130 logs.go:278] No container was found matching "kindnet"
	I0314 11:17:38.831427   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:17:38.842389   13130 logs.go:276] 1 containers: [118988a93a39]
	I0314 11:17:38.842407   13130 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:17:38.842413   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:17:38.877046   13130 logs.go:123] Gathering logs for coredns [26e593605a81] ...
	I0314 11:17:38.877059   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26e593605a81"
	I0314 11:17:38.888786   13130 logs.go:123] Gathering logs for coredns [a5edbd1e8e3a] ...
	I0314 11:17:38.888795   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5edbd1e8e3a"
	I0314 11:17:38.900824   13130 logs.go:123] Gathering logs for Docker ...
	I0314 11:17:38.900835   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:17:38.925168   13130 logs.go:123] Gathering logs for storage-provisioner [118988a93a39] ...
	I0314 11:17:38.925177   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 118988a93a39"
	I0314 11:17:38.936825   13130 logs.go:123] Gathering logs for container status ...
	I0314 11:17:38.936837   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:17:38.948435   13130 logs.go:123] Gathering logs for kubelet ...
	I0314 11:17:38.948446   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:17:38.985543   13130 logs.go:123] Gathering logs for dmesg ...
	I0314 11:17:38.985552   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:17:38.989772   13130 logs.go:123] Gathering logs for kube-proxy [b218570cde77] ...
	I0314 11:17:38.989780   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b218570cde77"
	I0314 11:17:39.001959   13130 logs.go:123] Gathering logs for kube-controller-manager [c539cb8e52c8] ...
	I0314 11:17:39.001973   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c539cb8e52c8"
	I0314 11:17:39.023542   13130 logs.go:123] Gathering logs for kube-apiserver [f764ecfece48] ...
	I0314 11:17:39.023553   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f764ecfece48"
	I0314 11:17:39.038215   13130 logs.go:123] Gathering logs for etcd [41bacb9f9ff3] ...
	I0314 11:17:39.038229   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41bacb9f9ff3"
	I0314 11:17:39.052226   13130 logs.go:123] Gathering logs for coredns [e4efe5340121] ...
	I0314 11:17:39.052237   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4efe5340121"
	I0314 11:17:39.064483   13130 logs.go:123] Gathering logs for coredns [6055464981da] ...
	I0314 11:17:39.064497   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6055464981da"
	I0314 11:17:39.077443   13130 logs.go:123] Gathering logs for kube-scheduler [12305d6bc0e3] ...
	I0314 11:17:39.077455   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12305d6bc0e3"
	I0314 11:17:41.594214   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:17:46.596553   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:17:46.596899   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:17:46.631428   13130 logs.go:276] 1 containers: [f764ecfece48]
	I0314 11:17:46.631569   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:17:46.652809   13130 logs.go:276] 1 containers: [41bacb9f9ff3]
	I0314 11:17:46.652908   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:17:46.666842   13130 logs.go:276] 4 containers: [6055464981da 26e593605a81 e4efe5340121 a5edbd1e8e3a]
	I0314 11:17:46.666928   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:17:46.679253   13130 logs.go:276] 1 containers: [12305d6bc0e3]
	I0314 11:17:46.679327   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:17:46.689899   13130 logs.go:276] 1 containers: [b218570cde77]
	I0314 11:17:46.689967   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:17:46.700557   13130 logs.go:276] 1 containers: [c539cb8e52c8]
	I0314 11:17:46.700628   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:17:46.710720   13130 logs.go:276] 0 containers: []
	W0314 11:17:46.710731   13130 logs.go:278] No container was found matching "kindnet"
	I0314 11:17:46.710792   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:17:46.726199   13130 logs.go:276] 1 containers: [118988a93a39]
	I0314 11:17:46.726217   13130 logs.go:123] Gathering logs for kube-proxy [b218570cde77] ...
	I0314 11:17:46.726223   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b218570cde77"
	I0314 11:17:46.738844   13130 logs.go:123] Gathering logs for kube-controller-manager [c539cb8e52c8] ...
	I0314 11:17:46.738855   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c539cb8e52c8"
	I0314 11:17:46.757934   13130 logs.go:123] Gathering logs for dmesg ...
	I0314 11:17:46.757945   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:17:46.763341   13130 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:17:46.763350   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:17:46.798654   13130 logs.go:123] Gathering logs for coredns [e4efe5340121] ...
	I0314 11:17:46.798664   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4efe5340121"
	I0314 11:17:46.811431   13130 logs.go:123] Gathering logs for kube-scheduler [12305d6bc0e3] ...
	I0314 11:17:46.811441   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12305d6bc0e3"
	I0314 11:17:46.827588   13130 logs.go:123] Gathering logs for kube-apiserver [f764ecfece48] ...
	I0314 11:17:46.827600   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f764ecfece48"
	I0314 11:17:46.842657   13130 logs.go:123] Gathering logs for coredns [6055464981da] ...
	I0314 11:17:46.842669   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6055464981da"
	I0314 11:17:46.861508   13130 logs.go:123] Gathering logs for coredns [26e593605a81] ...
	I0314 11:17:46.861519   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26e593605a81"
	I0314 11:17:46.872966   13130 logs.go:123] Gathering logs for kubelet ...
	I0314 11:17:46.872976   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:17:46.908161   13130 logs.go:123] Gathering logs for etcd [41bacb9f9ff3] ...
	I0314 11:17:46.908170   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41bacb9f9ff3"
	I0314 11:17:46.922137   13130 logs.go:123] Gathering logs for Docker ...
	I0314 11:17:46.922148   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:17:46.946907   13130 logs.go:123] Gathering logs for container status ...
	I0314 11:17:46.946914   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:17:46.958245   13130 logs.go:123] Gathering logs for coredns [a5edbd1e8e3a] ...
	I0314 11:17:46.958255   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5edbd1e8e3a"
	I0314 11:17:46.969748   13130 logs.go:123] Gathering logs for storage-provisioner [118988a93a39] ...
	I0314 11:17:46.969761   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 118988a93a39"
	I0314 11:17:49.482942   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:17:54.485342   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:17:54.490206   13130 out.go:177] 
	W0314 11:17:54.495169   13130 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0314 11:17:54.495183   13130 out.go:239] * 
	W0314 11:17:54.496402   13130 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0314 11:17:54.506869   13130 out.go:177] 

** /stderr **
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-636000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:626: *** TestRunningBinaryUpgrade FAILED at 2024-03-14 11:17:54.613088 -0700 PDT m=+1357.044843210
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-636000 -n running-upgrade-636000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-636000 -n running-upgrade-636000: exit status 2 (15.631046875s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-636000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-911000          | force-systemd-flag-911000 | jenkins | v1.32.0 | 14 Mar 24 11:07 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-600000              | force-systemd-env-600000  | jenkins | v1.32.0 | 14 Mar 24 11:07 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-600000           | force-systemd-env-600000  | jenkins | v1.32.0 | 14 Mar 24 11:07 PDT | 14 Mar 24 11:07 PDT |
	| start   | -p docker-flags-378000                | docker-flags-378000       | jenkins | v1.32.0 | 14 Mar 24 11:07 PDT |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-911000             | force-systemd-flag-911000 | jenkins | v1.32.0 | 14 Mar 24 11:07 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-911000          | force-systemd-flag-911000 | jenkins | v1.32.0 | 14 Mar 24 11:07 PDT | 14 Mar 24 11:07 PDT |
	| start   | -p cert-expiration-802000             | cert-expiration-802000    | jenkins | v1.32.0 | 14 Mar 24 11:07 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-378000 ssh               | docker-flags-378000       | jenkins | v1.32.0 | 14 Mar 24 11:07 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-378000 ssh               | docker-flags-378000       | jenkins | v1.32.0 | 14 Mar 24 11:07 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-378000                | docker-flags-378000       | jenkins | v1.32.0 | 14 Mar 24 11:07 PDT | 14 Mar 24 11:07 PDT |
	| start   | -p cert-options-764000                | cert-options-764000       | jenkins | v1.32.0 | 14 Mar 24 11:07 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-764000 ssh               | cert-options-764000       | jenkins | v1.32.0 | 14 Mar 24 11:07 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-764000 -- sudo        | cert-options-764000       | jenkins | v1.32.0 | 14 Mar 24 11:07 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-764000                | cert-options-764000       | jenkins | v1.32.0 | 14 Mar 24 11:07 PDT | 14 Mar 24 11:07 PDT |
	| start   | -p running-upgrade-636000             | minikube                  | jenkins | v1.26.0 | 14 Mar 24 11:07 PDT | 14 Mar 24 11:09 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-636000             | running-upgrade-636000    | jenkins | v1.32.0 | 14 Mar 24 11:09 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-802000             | cert-expiration-802000    | jenkins | v1.32.0 | 14 Mar 24 11:10 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-802000             | cert-expiration-802000    | jenkins | v1.32.0 | 14 Mar 24 11:10 PDT | 14 Mar 24 11:10 PDT |
	| start   | -p kubernetes-upgrade-023000          | kubernetes-upgrade-023000 | jenkins | v1.32.0 | 14 Mar 24 11:10 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-023000          | kubernetes-upgrade-023000 | jenkins | v1.32.0 | 14 Mar 24 11:11 PDT | 14 Mar 24 11:11 PDT |
	| start   | -p kubernetes-upgrade-023000          | kubernetes-upgrade-023000 | jenkins | v1.32.0 | 14 Mar 24 11:11 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2     |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-023000          | kubernetes-upgrade-023000 | jenkins | v1.32.0 | 14 Mar 24 11:11 PDT | 14 Mar 24 11:11 PDT |
	| start   | -p stopped-upgrade-157000             | minikube                  | jenkins | v1.26.0 | 14 Mar 24 11:11 PDT | 14 Mar 24 11:12 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-157000 stop           | minikube                  | jenkins | v1.26.0 | 14 Mar 24 11:12 PDT | 14 Mar 24 11:12 PDT |
	| start   | -p stopped-upgrade-157000             | stopped-upgrade-157000    | jenkins | v1.32.0 | 14 Mar 24 11:12 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/14 11:12:15
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
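
	(Every entry below follows the glog header layout documented on the previous line: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg. For anyone post-processing this report, a minimal parsing sketch in Go — the regexp and field names here are ours, not minikube's — looks like:)

package main

import (
	"fmt"
	"regexp"
)

// glogHeader matches: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
var glogHeader = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

func main() {
	line := `I0314 11:12:15.411443   13262 out.go:291] Setting OutFile to fd 1 ...`
	if m := glogHeader.FindStringSubmatch(line); m != nil {
		fmt.Printf("severity=%s date=%s time=%s tid=%s file=%s:%s msg=%q\n",
			m[1], m[2], m[3], m[4], m[5], m[6], m[7])
	}
}
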
	I0314 11:12:15.411443   13262 out.go:291] Setting OutFile to fd 1 ...
	I0314 11:12:15.411583   13262 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:12:15.411587   13262 out.go:304] Setting ErrFile to fd 2...
	I0314 11:12:15.411589   13262 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:12:15.411720   13262 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
	I0314 11:12:15.412758   13262 out.go:298] Setting JSON to false
	I0314 11:12:15.430321   13262 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7907,"bootTime":1710432028,"procs":377,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0314 11:12:15.430386   13262 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0314 11:12:15.434821   13262 out.go:177] * [stopped-upgrade-157000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0314 11:12:15.442904   13262 out.go:177]   - MINIKUBE_LOCATION=18384
	I0314 11:12:15.442971   13262 notify.go:220] Checking for updates...
	I0314 11:12:15.450816   13262 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	I0314 11:12:15.452291   13262 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0314 11:12:15.455781   13262 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 11:12:15.458795   13262 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	I0314 11:12:15.461859   13262 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 11:12:15.465076   13262 config.go:182] Loaded profile config "stopped-upgrade-157000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0314 11:12:15.468745   13262 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0314 11:12:15.471795   13262 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 11:12:15.475695   13262 out.go:177] * Using the qemu2 driver based on existing profile
	I0314 11:12:15.482823   13262 start.go:297] selected driver: qemu2
	I0314 11:12:15.482829   13262 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-157000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52332 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-157000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0314 11:12:15.482896   13262 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 11:12:15.485618   13262 cni.go:84] Creating CNI manager for ""
	I0314 11:12:15.485631   13262 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0314 11:12:15.485656   13262 start.go:340] cluster config:
	{Name:stopped-upgrade-157000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52332 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-157000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0314 11:12:15.485705   13262 iso.go:125] acquiring lock: {Name:mkb282eb7d87bd36e7a71d78648e2dadb7f0a89e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 11:12:15.493836   13262 out.go:177] * Starting "stopped-upgrade-157000" primary control-plane node in "stopped-upgrade-157000" cluster
	I0314 11:12:15.497790   13262 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0314 11:12:15.497807   13262 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0314 11:12:15.497817   13262 cache.go:56] Caching tarball of preloaded images
	I0314 11:12:15.497871   13262 preload.go:173] Found /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0314 11:12:15.497876   13262 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0314 11:12:15.497931   13262 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/stopped-upgrade-157000/config.json ...
	I0314 11:12:15.498386   13262 start.go:360] acquireMachinesLock for stopped-upgrade-157000: {Name:mk7f342f84a1fcf2acaf2ebdc885e8ee2d848fd8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 11:12:15.498416   13262 start.go:364] duration metric: took 24.708µs to acquireMachinesLock for "stopped-upgrade-157000"
	I0314 11:12:15.498425   13262 start.go:96] Skipping create...Using existing machine configuration
	I0314 11:12:15.498429   13262 fix.go:54] fixHost starting: 
	I0314 11:12:15.498525   13262 fix.go:112] recreateIfNeeded on stopped-upgrade-157000: state=Stopped err=<nil>
	W0314 11:12:15.498535   13262 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 11:12:15.502838   13262 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-157000" ...
	I0314 11:12:14.277185   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:12:15.510856   13262 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/8.2.1/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/stopped-upgrade-157000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/stopped-upgrade-157000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/stopped-upgrade-157000/qemu.pid -nic user,model=virtio,hostfwd=tcp::52297-:22,hostfwd=tcp::52298-:2376,hostname=stopped-upgrade-157000 -daemonize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/stopped-upgrade-157000/disk.qcow2
	I0314 11:12:15.558810   13262 main.go:141] libmachine: STDOUT: 
	I0314 11:12:15.558838   13262 main.go:141] libmachine: STDERR: 
	I0314 11:12:15.558844   13262 main.go:141] libmachine: Waiting for VM to start (ssh -p 52297 docker@127.0.0.1)...
	I0314 11:12:19.279780   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
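
	(The healthz entries interleaved here, pid 13130, poll the apiserver with a per-request deadline and report a timeout as "stopped". A standalone sketch of that probe — assumed shape only; minikube's real logic lives in api_server.go, and the skipped certificate verification is purely for brevity:)

package main

import (
	"context"
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// checkHealthz performs one probe with a hard deadline, the pattern
// behind the "Checking apiserver healthz at ..." entries above.
func checkHealthz(url string) error {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
	if err != nil {
		return err
	}
	client := &http.Client{Transport: &http.Transport{
		// The apiserver cert is cluster-internal, so this sketch skips verification.
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Do(req)
	if err != nil {
		return err // surfaces as "context deadline exceeded" in the entries above
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %s", resp.Status)
	}
	return nil
}

func main() {
	if err := checkHealthz("https://10.0.2.15:8443/healthz"); err != nil {
		fmt.Println("stopped:", err)
	}
}
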
	I0314 11:12:19.280131   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:12:19.318227   13130 logs.go:276] 2 containers: [bf97ed5e8eab ecd9237272db]
	I0314 11:12:19.318383   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:12:19.344546   13130 logs.go:276] 2 containers: [bf3b24ba54ce ffda7ab9d6f7]
	I0314 11:12:19.344681   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:12:19.359615   13130 logs.go:276] 1 containers: [34be9399e0e3]
	I0314 11:12:19.359693   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:12:19.378005   13130 logs.go:276] 2 containers: [47b861e30431 32036b13d627]
	I0314 11:12:19.378077   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:12:19.388835   13130 logs.go:276] 1 containers: [af01c9fe94bd]
	I0314 11:12:19.388903   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:12:19.399447   13130 logs.go:276] 2 containers: [b648808b453b 71a0c47a9be2]
	I0314 11:12:19.399520   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:12:19.409815   13130 logs.go:276] 0 containers: []
	W0314 11:12:19.409830   13130 logs.go:278] No container was found matching "kindnet"
	I0314 11:12:19.409888   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:12:19.420189   13130 logs.go:276] 2 containers: [819dad21cc40 5b39a3bc91a7]
	I0314 11:12:19.420213   13130 logs.go:123] Gathering logs for storage-provisioner [819dad21cc40] ...
	I0314 11:12:19.420218   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 819dad21cc40"
	I0314 11:12:19.431247   13130 logs.go:123] Gathering logs for Docker ...
	I0314 11:12:19.431259   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:12:19.454472   13130 logs.go:123] Gathering logs for dmesg ...
	I0314 11:12:19.454479   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:12:19.458675   13130 logs.go:123] Gathering logs for kube-apiserver [ecd9237272db] ...
	I0314 11:12:19.458681   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecd9237272db"
	I0314 11:12:19.471004   13130 logs.go:123] Gathering logs for etcd [bf3b24ba54ce] ...
	I0314 11:12:19.471019   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf3b24ba54ce"
	I0314 11:12:19.485533   13130 logs.go:123] Gathering logs for etcd [ffda7ab9d6f7] ...
	I0314 11:12:19.485543   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffda7ab9d6f7"
	I0314 11:12:19.503648   13130 logs.go:123] Gathering logs for container status ...
	I0314 11:12:19.503659   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:12:19.515264   13130 logs.go:123] Gathering logs for kubelet ...
	I0314 11:12:19.515277   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:12:19.551312   13130 logs.go:123] Gathering logs for coredns [34be9399e0e3] ...
	I0314 11:12:19.551322   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34be9399e0e3"
	I0314 11:12:19.567636   13130 logs.go:123] Gathering logs for kube-scheduler [32036b13d627] ...
	I0314 11:12:19.567647   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32036b13d627"
	I0314 11:12:19.582439   13130 logs.go:123] Gathering logs for kube-controller-manager [b648808b453b] ...
	I0314 11:12:19.582450   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b648808b453b"
	I0314 11:12:19.600729   13130 logs.go:123] Gathering logs for kube-proxy [af01c9fe94bd] ...
	I0314 11:12:19.600740   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af01c9fe94bd"
	I0314 11:12:19.611852   13130 logs.go:123] Gathering logs for kube-controller-manager [71a0c47a9be2] ...
	I0314 11:12:19.611862   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71a0c47a9be2"
	I0314 11:12:19.623307   13130 logs.go:123] Gathering logs for storage-provisioner [5b39a3bc91a7] ...
	I0314 11:12:19.623316   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b39a3bc91a7"
	I0314 11:12:19.634469   13130 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:12:19.634479   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:12:19.672203   13130 logs.go:123] Gathering logs for kube-apiserver [bf97ed5e8eab] ...
	I0314 11:12:19.672213   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf97ed5e8eab"
	I0314 11:12:19.686125   13130 logs.go:123] Gathering logs for kube-scheduler [47b861e30431] ...
	I0314 11:12:19.686135   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b861e30431"
	I0314 11:12:22.203863   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:12:27.205285   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:12:27.205419   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:12:27.217645   13130 logs.go:276] 2 containers: [bf97ed5e8eab ecd9237272db]
	I0314 11:12:27.217736   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:12:27.229847   13130 logs.go:276] 2 containers: [bf3b24ba54ce ffda7ab9d6f7]
	I0314 11:12:27.229925   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:12:27.241894   13130 logs.go:276] 1 containers: [34be9399e0e3]
	I0314 11:12:27.241964   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:12:27.254932   13130 logs.go:276] 2 containers: [47b861e30431 32036b13d627]
	I0314 11:12:27.255015   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:12:27.267485   13130 logs.go:276] 1 containers: [af01c9fe94bd]
	I0314 11:12:27.267563   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:12:27.280088   13130 logs.go:276] 2 containers: [b648808b453b 71a0c47a9be2]
	I0314 11:12:27.280163   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:12:27.291922   13130 logs.go:276] 0 containers: []
	W0314 11:12:27.291935   13130 logs.go:278] No container was found matching "kindnet"
	I0314 11:12:27.292001   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:12:27.304519   13130 logs.go:276] 2 containers: [819dad21cc40 5b39a3bc91a7]
	I0314 11:12:27.304537   13130 logs.go:123] Gathering logs for kube-apiserver [ecd9237272db] ...
	I0314 11:12:27.304544   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecd9237272db"
	I0314 11:12:27.323403   13130 logs.go:123] Gathering logs for container status ...
	I0314 11:12:27.323416   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:12:27.337385   13130 logs.go:123] Gathering logs for kube-apiserver [bf97ed5e8eab] ...
	I0314 11:12:27.337398   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf97ed5e8eab"
	I0314 11:12:27.353840   13130 logs.go:123] Gathering logs for etcd [bf3b24ba54ce] ...
	I0314 11:12:27.353856   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf3b24ba54ce"
	I0314 11:12:27.371995   13130 logs.go:123] Gathering logs for coredns [34be9399e0e3] ...
	I0314 11:12:27.372008   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34be9399e0e3"
	I0314 11:12:27.385489   13130 logs.go:123] Gathering logs for kube-scheduler [32036b13d627] ...
	I0314 11:12:27.385504   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32036b13d627"
	I0314 11:12:27.402795   13130 logs.go:123] Gathering logs for kube-scheduler [47b861e30431] ...
	I0314 11:12:27.402811   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b861e30431"
	I0314 11:12:27.417868   13130 logs.go:123] Gathering logs for kube-controller-manager [b648808b453b] ...
	I0314 11:12:27.417882   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b648808b453b"
	I0314 11:12:27.438275   13130 logs.go:123] Gathering logs for storage-provisioner [819dad21cc40] ...
	I0314 11:12:27.438296   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 819dad21cc40"
	I0314 11:12:27.457136   13130 logs.go:123] Gathering logs for storage-provisioner [5b39a3bc91a7] ...
	I0314 11:12:27.457148   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b39a3bc91a7"
	I0314 11:12:27.470568   13130 logs.go:123] Gathering logs for Docker ...
	I0314 11:12:27.470582   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:12:27.496823   13130 logs.go:123] Gathering logs for kubelet ...
	I0314 11:12:27.496845   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:12:27.537136   13130 logs.go:123] Gathering logs for dmesg ...
	I0314 11:12:27.537153   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:12:27.542559   13130 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:12:27.542569   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:12:27.581674   13130 logs.go:123] Gathering logs for etcd [ffda7ab9d6f7] ...
	I0314 11:12:27.581688   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffda7ab9d6f7"
	I0314 11:12:27.596537   13130 logs.go:123] Gathering logs for kube-proxy [af01c9fe94bd] ...
	I0314 11:12:27.596551   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af01c9fe94bd"
	I0314 11:12:27.609559   13130 logs.go:123] Gathering logs for kube-controller-manager [71a0c47a9be2] ...
	I0314 11:12:27.609571   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71a0c47a9be2"
	I0314 11:12:30.124187   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:12:35.369452   13262 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/stopped-upgrade-157000/config.json ...
	I0314 11:12:35.369661   13262 machine.go:94] provisionDockerMachine start ...
	I0314 11:12:35.369707   13262 main.go:141] libmachine: Using SSH client type: native
	I0314 11:12:35.369834   13262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104ae9bf0] 0x104aec450 <nil>  [] 0s} localhost 52297 <nil> <nil>}
	I0314 11:12:35.369838   13262 main.go:141] libmachine: About to run SSH command:
	hostname
	I0314 11:12:35.439413   13262 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0314 11:12:35.439427   13262 buildroot.go:166] provisioning hostname "stopped-upgrade-157000"
	I0314 11:12:35.439494   13262 main.go:141] libmachine: Using SSH client type: native
	I0314 11:12:35.439614   13262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104ae9bf0] 0x104aec450 <nil>  [] 0s} localhost 52297 <nil> <nil>}
	I0314 11:12:35.439624   13262 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-157000 && echo "stopped-upgrade-157000" | sudo tee /etc/hostname
	I0314 11:12:35.513383   13262 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-157000
	
	I0314 11:12:35.513449   13262 main.go:141] libmachine: Using SSH client type: native
	I0314 11:12:35.513576   13262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104ae9bf0] 0x104aec450 <nil>  [] 0s} localhost 52297 <nil> <nil>}
	I0314 11:12:35.513586   13262 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-157000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-157000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-157000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 11:12:35.585623   13262 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 11:12:35.585637   13262 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18384-10823/.minikube CaCertPath:/Users/jenkins/minikube-integration/18384-10823/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18384-10823/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18384-10823/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18384-10823/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18384-10823/.minikube}
	I0314 11:12:35.585647   13262 buildroot.go:174] setting up certificates
	I0314 11:12:35.585652   13262 provision.go:84] configureAuth start
	I0314 11:12:35.585660   13262 provision.go:143] copyHostCerts
	I0314 11:12:35.585745   13262 exec_runner.go:144] found /Users/jenkins/minikube-integration/18384-10823/.minikube/ca.pem, removing ...
	I0314 11:12:35.585752   13262 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18384-10823/.minikube/ca.pem
	I0314 11:12:35.586538   13262 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18384-10823/.minikube/ca.pem (1082 bytes)
	I0314 11:12:35.586694   13262 exec_runner.go:144] found /Users/jenkins/minikube-integration/18384-10823/.minikube/cert.pem, removing ...
	I0314 11:12:35.586698   13262 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18384-10823/.minikube/cert.pem
	I0314 11:12:35.586747   13262 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18384-10823/.minikube/cert.pem (1123 bytes)
	I0314 11:12:35.586851   13262 exec_runner.go:144] found /Users/jenkins/minikube-integration/18384-10823/.minikube/key.pem, removing ...
	I0314 11:12:35.586854   13262 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18384-10823/.minikube/key.pem
	I0314 11:12:35.586896   13262 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18384-10823/.minikube/key.pem (1675 bytes)
	I0314 11:12:35.586974   13262 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18384-10823/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18384-10823/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-157000 san=[127.0.0.1 localhost minikube stopped-upgrade-157000]
	I0314 11:12:35.701532   13262 provision.go:177] copyRemoteCerts
	I0314 11:12:35.701568   13262 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 11:12:35.701577   13262 sshutil.go:53] new ssh client: &{IP:localhost Port:52297 SSHKeyPath:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/stopped-upgrade-157000/id_rsa Username:docker}
	I0314 11:12:35.738247   13262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0314 11:12:35.745299   13262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0314 11:12:35.752080   13262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0314 11:12:35.759024   13262 provision.go:87] duration metric: took 173.365917ms to configureAuth
	I0314 11:12:35.759034   13262 buildroot.go:189] setting minikube options for container-runtime
	I0314 11:12:35.759148   13262 config.go:182] Loaded profile config "stopped-upgrade-157000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0314 11:12:35.759189   13262 main.go:141] libmachine: Using SSH client type: native
	I0314 11:12:35.759289   13262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104ae9bf0] 0x104aec450 <nil>  [] 0s} localhost 52297 <nil> <nil>}
	I0314 11:12:35.759294   13262 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0314 11:12:35.827330   13262 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0314 11:12:35.827342   13262 buildroot.go:70] root file system type: tmpfs
	I0314 11:12:35.827392   13262 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0314 11:12:35.827436   13262 main.go:141] libmachine: Using SSH client type: native
	I0314 11:12:35.827535   13262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104ae9bf0] 0x104aec450 <nil>  [] 0s} localhost 52297 <nil> <nil>}
	I0314 11:12:35.827567   13262 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0314 11:12:35.897884   13262 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0314 11:12:35.897940   13262 main.go:141] libmachine: Using SSH client type: native
	I0314 11:12:35.898062   13262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104ae9bf0] 0x104aec450 <nil>  [] 0s} localhost 52297 <nil> <nil>}
	I0314 11:12:35.898070   13262 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0314 11:12:36.249445   13262 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0314 11:12:36.249466   13262 machine.go:97] duration metric: took 879.816ms to provisionDockerMachine
	I0314 11:12:36.249477   13262 start.go:293] postStartSetup for "stopped-upgrade-157000" (driver="qemu2")
	I0314 11:12:36.249483   13262 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 11:12:36.249559   13262 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 11:12:36.249570   13262 sshutil.go:53] new ssh client: &{IP:localhost Port:52297 SSHKeyPath:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/stopped-upgrade-157000/id_rsa Username:docker}
	I0314 11:12:36.286391   13262 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 11:12:36.287687   13262 info.go:137] Remote host: Buildroot 2021.02.12
	I0314 11:12:36.287695   13262 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18384-10823/.minikube/addons for local assets ...
	I0314 11:12:36.287763   13262 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18384-10823/.minikube/files for local assets ...
	I0314 11:12:36.287882   13262 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18384-10823/.minikube/files/etc/ssl/certs/112382.pem -> 112382.pem in /etc/ssl/certs
	I0314 11:12:36.288012   13262 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 11:12:36.290701   13262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18384-10823/.minikube/files/etc/ssl/certs/112382.pem --> /etc/ssl/certs/112382.pem (1708 bytes)
	I0314 11:12:36.297579   13262 start.go:296] duration metric: took 48.098333ms for postStartSetup
	I0314 11:12:36.297593   13262 fix.go:56] duration metric: took 20.799555583s for fixHost
	I0314 11:12:36.297633   13262 main.go:141] libmachine: Using SSH client type: native
	I0314 11:12:36.297733   13262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104ae9bf0] 0x104aec450 <nil>  [] 0s} localhost 52297 <nil> <nil>}
	I0314 11:12:36.297738   13262 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0314 11:12:36.367100   13262 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710439956.736506504
	
	I0314 11:12:36.367109   13262 fix.go:216] guest clock: 1710439956.736506504
	I0314 11:12:36.367113   13262 fix.go:229] Guest: 2024-03-14 11:12:36.736506504 -0700 PDT Remote: 2024-03-14 11:12:36.297594 -0700 PDT m=+20.917985459 (delta=438.912504ms)
	I0314 11:12:36.367124   13262 fix.go:200] guest clock delta is within tolerance: 438.912504ms
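
	(The tolerance check above compares the guest clock, read over SSH, against the host clock at the moment the command returned. The delta can be reproduced from the two timestamps logged by fix.go:229; the one-second threshold below is an assumption for this sketch, not minikube's configured value:)

package main

import (
	"fmt"
	"time"
)

func main() {
	pdt := time.FixedZone("PDT", -7*3600)
	// Timestamps copied from the fix.go:229 entry above.
	guest := time.Date(2024, 3, 14, 11, 12, 36, 736506504, pdt)
	host := time.Date(2024, 3, 14, 11, 12, 36, 297594000, pdt)
	delta := guest.Sub(host) // 438.912504ms, as logged
	const tolerance = time.Second // assumed threshold for illustration
	fmt.Printf("delta=%v within tolerance=%v\n", delta, delta.Abs() < tolerance)
}
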
	I0314 11:12:36.367127   13262 start.go:83] releasing machines lock for "stopped-upgrade-157000", held for 20.869099083s
	I0314 11:12:36.367196   13262 ssh_runner.go:195] Run: cat /version.json
	I0314 11:12:36.367198   13262 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 11:12:36.367205   13262 sshutil.go:53] new ssh client: &{IP:localhost Port:52297 SSHKeyPath:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/stopped-upgrade-157000/id_rsa Username:docker}
	I0314 11:12:36.367212   13262 sshutil.go:53] new ssh client: &{IP:localhost Port:52297 SSHKeyPath:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/stopped-upgrade-157000/id_rsa Username:docker}
	W0314 11:12:36.367811   13262 sshutil.go:64] dial failure (will retry): dial tcp [::1]:52297: connect: connection refused
	I0314 11:12:36.367835   13262 retry.go:31] will retry after 249.00188ms: dial tcp [::1]:52297: connect: connection refused
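
	(The dial failure above is retried with a short backoff, per retry.go:31. A minimal sketch of that dial-and-retry pattern — the doubling schedule here is illustrative, not minikube's actual randomized backoff:)

package main

import (
	"fmt"
	"net"
	"time"
)

// dialWithRetry keeps attempting a TCP connection, backing off between tries.
func dialWithRetry(addr string, attempts int) (net.Conn, error) {
	var err error
	delay := 250 * time.Millisecond
	for i := 0; i < attempts; i++ {
		var c net.Conn
		if c, err = net.DialTimeout("tcp", addr, 2*time.Second); err == nil {
			return c, nil
		}
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		delay *= 2 // simple doubling for the sketch
	}
	return nil, err
}

func main() {
	if c, err := dialWithRetry("localhost:52297", 3); err == nil {
		c.Close()
	}
}
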
	W0314 11:12:36.401338   13262 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0314 11:12:36.401390   13262 ssh_runner.go:195] Run: systemctl --version
	I0314 11:12:36.403060   13262 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0314 11:12:36.404798   13262 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 11:12:36.404822   13262 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0314 11:12:36.407512   13262 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0314 11:12:36.412583   13262 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0314 11:12:36.412592   13262 start.go:494] detecting cgroup driver to use...
	I0314 11:12:36.412660   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 11:12:36.419547   13262 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0314 11:12:36.423237   13262 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0314 11:12:36.426483   13262 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0314 11:12:36.426508   13262 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0314 11:12:36.429924   13262 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0314 11:12:36.432853   13262 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0314 11:12:36.435575   13262 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0314 11:12:36.438834   13262 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 11:12:36.442299   13262 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0314 11:12:36.445506   13262 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 11:12:36.448079   13262 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 11:12:36.450997   13262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 11:12:36.514561   13262 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0314 11:12:36.523004   13262 start.go:494] detecting cgroup driver to use...
	I0314 11:12:36.523094   13262 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0314 11:12:36.531400   13262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 11:12:36.536015   13262 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 11:12:36.549944   13262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 11:12:36.555968   13262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0314 11:12:36.562789   13262 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0314 11:12:36.601990   13262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0314 11:12:36.606825   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 11:12:36.611959   13262 ssh_runner.go:195] Run: which cri-dockerd
	I0314 11:12:36.613211   13262 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0314 11:12:36.615601   13262 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0314 11:12:36.620675   13262 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0314 11:12:36.688261   13262 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0314 11:12:36.751298   13262 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0314 11:12:36.751369   13262 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0314 11:12:36.758638   13262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 11:12:36.827593   13262 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0314 11:12:37.949413   13262 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.121824583s)
	I0314 11:12:37.949482   13262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0314 11:12:37.954513   13262 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0314 11:12:37.960682   13262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0314 11:12:37.965292   13262 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0314 11:12:38.029504   13262 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0314 11:12:38.090803   13262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 11:12:38.154234   13262 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0314 11:12:38.160336   13262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0314 11:12:38.164887   13262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 11:12:38.231058   13262 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0314 11:12:38.276313   13262 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0314 11:12:38.276388   13262 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0314 11:12:38.278908   13262 start.go:562] Will wait 60s for crictl version
	I0314 11:12:38.278970   13262 ssh_runner.go:195] Run: which crictl
	I0314 11:12:38.280277   13262 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 11:12:38.295680   13262 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0314 11:12:38.295743   13262 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0314 11:12:38.319713   13262 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0314 11:12:35.126452   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:12:35.128075   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:12:35.167650   13130 logs.go:276] 2 containers: [bf97ed5e8eab ecd9237272db]
	I0314 11:12:35.167788   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:12:35.188971   13130 logs.go:276] 2 containers: [bf3b24ba54ce ffda7ab9d6f7]
	I0314 11:12:35.189116   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:12:35.204182   13130 logs.go:276] 1 containers: [34be9399e0e3]
	I0314 11:12:35.204253   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:12:35.216867   13130 logs.go:276] 2 containers: [47b861e30431 32036b13d627]
	I0314 11:12:35.216945   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:12:35.227664   13130 logs.go:276] 1 containers: [af01c9fe94bd]
	I0314 11:12:35.227729   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:12:35.238502   13130 logs.go:276] 2 containers: [b648808b453b 71a0c47a9be2]
	I0314 11:12:35.238561   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:12:35.249028   13130 logs.go:276] 0 containers: []
	W0314 11:12:35.249043   13130 logs.go:278] No container was found matching "kindnet"
	I0314 11:12:35.249102   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:12:35.267727   13130 logs.go:276] 2 containers: [819dad21cc40 5b39a3bc91a7]
	I0314 11:12:35.267747   13130 logs.go:123] Gathering logs for dmesg ...
	I0314 11:12:35.267753   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:12:35.272161   13130 logs.go:123] Gathering logs for kube-scheduler [47b861e30431] ...
	I0314 11:12:35.272169   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b861e30431"
	I0314 11:12:35.284055   13130 logs.go:123] Gathering logs for kube-scheduler [32036b13d627] ...
	I0314 11:12:35.284065   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32036b13d627"
	I0314 11:12:35.298625   13130 logs.go:123] Gathering logs for Docker ...
	I0314 11:12:35.298637   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:12:35.322155   13130 logs.go:123] Gathering logs for kube-apiserver [bf97ed5e8eab] ...
	I0314 11:12:35.322162   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf97ed5e8eab"
	I0314 11:12:35.336059   13130 logs.go:123] Gathering logs for kube-controller-manager [b648808b453b] ...
	I0314 11:12:35.336071   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b648808b453b"
	I0314 11:12:35.353563   13130 logs.go:123] Gathering logs for kubelet ...
	I0314 11:12:35.353574   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:12:35.390919   13130 logs.go:123] Gathering logs for etcd [ffda7ab9d6f7] ...
	I0314 11:12:35.390930   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffda7ab9d6f7"
	I0314 11:12:35.406086   13130 logs.go:123] Gathering logs for kube-proxy [af01c9fe94bd] ...
	I0314 11:12:35.406097   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af01c9fe94bd"
	I0314 11:12:35.421136   13130 logs.go:123] Gathering logs for storage-provisioner [5b39a3bc91a7] ...
	I0314 11:12:35.421147   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b39a3bc91a7"
	I0314 11:12:35.432715   13130 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:12:35.432727   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:12:35.469560   13130 logs.go:123] Gathering logs for kube-apiserver [ecd9237272db] ...
	I0314 11:12:35.469571   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecd9237272db"
	I0314 11:12:35.487249   13130 logs.go:123] Gathering logs for etcd [bf3b24ba54ce] ...
	I0314 11:12:35.487263   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf3b24ba54ce"
	I0314 11:12:35.501698   13130 logs.go:123] Gathering logs for coredns [34be9399e0e3] ...
	I0314 11:12:35.501708   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34be9399e0e3"
	I0314 11:12:35.513300   13130 logs.go:123] Gathering logs for kube-controller-manager [71a0c47a9be2] ...
	I0314 11:12:35.513313   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71a0c47a9be2"
	I0314 11:12:35.525391   13130 logs.go:123] Gathering logs for storage-provisioner [819dad21cc40] ...
	I0314 11:12:35.525402   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 819dad21cc40"
	I0314 11:12:35.537081   13130 logs.go:123] Gathering logs for container status ...
	I0314 11:12:35.537092   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:12:38.051474   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:12:38.339982   13262 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0314 11:12:38.340047   13262 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0314 11:12:38.341361   13262 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 11:12:38.345356   13262 kubeadm.go:877] updating cluster {Name:stopped-upgrade-157000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52332 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-157000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0314 11:12:38.345403   13262 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0314 11:12:38.345446   13262 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0314 11:12:38.356007   13262 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0314 11:12:38.356017   13262 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
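
	(The mismatch above is why the preload tarball is copied over below: the images cached inside the old v1.26.0 VM are tagged k8s.gcr.io, while this minikube build looks for the renamed registry.k8s.io references. The docker.go:691 decision reduces to list membership; a sketch, with the list truncated from the stdout dump above:)

package main

import (
	"fmt"
	"slices"
)

func main() {
	// Truncated from the -- stdout -- image dump above.
	preloaded := []string{
		"k8s.gcr.io/kube-apiserver:v1.24.1",
		"k8s.gcr.io/kube-proxy:v1.24.1",
		"gcr.io/k8s-minikube/storage-provisioner:v5",
	}
	want := "registry.k8s.io/kube-apiserver:v1.24.1"
	if !slices.Contains(preloaded, want) {
		fmt.Printf("%s wasn't preloaded\n", want) // matches the docker.go:691 entry
	}
}
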
	I0314 11:12:38.356062   13262 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0314 11:12:38.359304   13262 ssh_runner.go:195] Run: which lz4
	I0314 11:12:38.360641   13262 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0314 11:12:38.361891   13262 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0314 11:12:38.361906   13262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0314 11:12:39.087859   13262 docker.go:649] duration metric: took 727.259542ms to copy over tarball
	I0314 11:12:39.087918   13262 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0314 11:12:43.053700   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:12:43.054115   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:12:43.093663   13130 logs.go:276] 2 containers: [bf97ed5e8eab ecd9237272db]
	I0314 11:12:43.093782   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:12:43.122408   13130 logs.go:276] 2 containers: [bf3b24ba54ce ffda7ab9d6f7]
	I0314 11:12:43.122507   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:12:43.135841   13130 logs.go:276] 1 containers: [34be9399e0e3]
	I0314 11:12:43.135915   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:12:43.147248   13130 logs.go:276] 2 containers: [47b861e30431 32036b13d627]
	I0314 11:12:43.147319   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:12:43.161561   13130 logs.go:276] 1 containers: [af01c9fe94bd]
	I0314 11:12:43.161635   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:12:43.185744   13130 logs.go:276] 2 containers: [b648808b453b 71a0c47a9be2]
	I0314 11:12:43.185820   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:12:43.199751   13130 logs.go:276] 0 containers: []
	W0314 11:12:43.199763   13130 logs.go:278] No container was found matching "kindnet"
	I0314 11:12:43.199826   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:12:43.210580   13130 logs.go:276] 2 containers: [819dad21cc40 5b39a3bc91a7]
	I0314 11:12:43.210599   13130 logs.go:123] Gathering logs for dmesg ...
	I0314 11:12:43.210605   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:12:43.215136   13130 logs.go:123] Gathering logs for coredns [34be9399e0e3] ...
	I0314 11:12:43.215146   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34be9399e0e3"
	I0314 11:12:43.226473   13130 logs.go:123] Gathering logs for kube-controller-manager [b648808b453b] ...
	I0314 11:12:43.226484   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b648808b453b"
	I0314 11:12:43.248364   13130 logs.go:123] Gathering logs for storage-provisioner [5b39a3bc91a7] ...
	I0314 11:12:43.248374   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b39a3bc91a7"
	I0314 11:12:43.262821   13130 logs.go:123] Gathering logs for kube-scheduler [47b861e30431] ...
	I0314 11:12:43.262832   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b861e30431"
	I0314 11:12:43.274587   13130 logs.go:123] Gathering logs for container status ...
	I0314 11:12:43.274600   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:12:43.286194   13130 logs.go:123] Gathering logs for kube-apiserver [ecd9237272db] ...
	I0314 11:12:43.286208   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecd9237272db"
	I0314 11:12:43.299701   13130 logs.go:123] Gathering logs for kube-proxy [af01c9fe94bd] ...
	I0314 11:12:43.299715   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af01c9fe94bd"
	I0314 11:12:43.311820   13130 logs.go:123] Gathering logs for kube-controller-manager [71a0c47a9be2] ...
	I0314 11:12:43.311831   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71a0c47a9be2"
	I0314 11:12:43.323552   13130 logs.go:123] Gathering logs for storage-provisioner [819dad21cc40] ...
	I0314 11:12:43.323563   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 819dad21cc40"
	I0314 11:12:43.334864   13130 logs.go:123] Gathering logs for kube-scheduler [32036b13d627] ...
	I0314 11:12:43.334875   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32036b13d627"
	I0314 11:12:43.350221   13130 logs.go:123] Gathering logs for Docker ...
	I0314 11:12:43.350234   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:12:43.374630   13130 logs.go:123] Gathering logs for kubelet ...
	I0314 11:12:43.374641   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:12:43.412821   13130 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:12:43.412830   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:12:43.449736   13130 logs.go:123] Gathering logs for kube-apiserver [bf97ed5e8eab] ...
	I0314 11:12:43.449747   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf97ed5e8eab"
	I0314 11:12:43.463779   13130 logs.go:123] Gathering logs for etcd [bf3b24ba54ce] ...
	I0314 11:12:43.463788   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf3b24ba54ce"
	I0314 11:12:43.477766   13130 logs.go:123] Gathering logs for etcd [ffda7ab9d6f7] ...
	I0314 11:12:43.477775   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffda7ab9d6f7"
	I0314 11:12:40.412681   13262 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.324763333s)
	I0314 11:12:40.423763   13262 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0314 11:12:40.442045   13262 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0314 11:12:40.445602   13262 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0314 11:12:40.451056   13262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 11:12:40.508887   13262 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0314 11:12:42.069086   13262 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.560210125s)
	I0314 11:12:42.069184   13262 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0314 11:12:42.084820   13262 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0314 11:12:42.084830   13262 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0314 11:12:42.084836   13262 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0314 11:12:42.093918   13262 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0314 11:12:42.093973   13262 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 11:12:42.094020   13262 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0314 11:12:42.094086   13262 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0314 11:12:42.094331   13262 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0314 11:12:42.094435   13262 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0314 11:12:42.094769   13262 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0314 11:12:42.094867   13262 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0314 11:12:42.103050   13262 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 11:12:42.104352   13262 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0314 11:12:42.104699   13262 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0314 11:12:42.104765   13262 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0314 11:12:42.104871   13262 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0314 11:12:42.104882   13262 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0314 11:12:42.104918   13262 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0314 11:12:42.104944   13262 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0314 11:12:44.082498   13262 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0314 11:12:44.114834   13262 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0314 11:12:44.114882   13262 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0314 11:12:44.114982   13262 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0314 11:12:44.132783   13262 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0314 11:12:44.148046   13262 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0314 11:12:44.162929   13262 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0314 11:12:44.162951   13262 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0314 11:12:44.163011   13262 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0314 11:12:44.174611   13262 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	W0314 11:12:44.193295   13262 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0314 11:12:44.193424   13262 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0314 11:12:44.203974   13262 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0314 11:12:44.203992   13262 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0314 11:12:44.204046   13262 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0314 11:12:44.213673   13262 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0314 11:12:44.213785   13262 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0314 11:12:44.216074   13262 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0314 11:12:44.216089   13262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0314 11:12:44.220142   13262 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0314 11:12:44.222432   13262 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0314 11:12:44.229559   13262 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0314 11:12:44.242350   13262 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0314 11:12:44.249328   13262 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0314 11:12:44.249357   13262 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0314 11:12:44.249414   13262 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0314 11:12:44.265678   13262 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0314 11:12:44.265693   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0314 11:12:44.277161   13262 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0314 11:12:44.277184   13262 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0314 11:12:44.277195   13262 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0314 11:12:44.277205   13262 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0314 11:12:44.277239   13262 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0314 11:12:44.277240   13262 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0314 11:12:44.277282   13262 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0314 11:12:44.277290   13262 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0314 11:12:44.277307   13262 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0314 11:12:44.284264   13262 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0314 11:12:44.335016   13262 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0314 11:12:44.335067   13262 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0314 11:12:44.335087   13262 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0314 11:12:44.335108   13262 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0314 11:12:44.335189   13262 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0314 11:12:44.335189   13262 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0314 11:12:44.336695   13262 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0314 11:12:44.336700   13262 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0314 11:12:44.336709   13262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0314 11:12:44.336708   13262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0314 11:12:44.362828   13262 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0314 11:12:44.362843   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0314 11:12:44.417810   13262 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0314 11:12:44.531804   13262 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0314 11:12:44.531819   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	W0314 11:12:44.551269   13262 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0314 11:12:44.551380   13262 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 11:12:44.674427   13262 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0314 11:12:44.674453   13262 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0314 11:12:44.674471   13262 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 11:12:44.674529   13262 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 11:12:44.688639   13262 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0314 11:12:44.688758   13262 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0314 11:12:44.690179   13262 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0314 11:12:44.690197   13262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0314 11:12:44.714937   13262 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0314 11:12:44.714953   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0314 11:12:44.948541   13262 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0314 11:12:44.948580   13262 cache_images.go:92] duration metric: took 2.863791375s to LoadCachedImages
	W0314 11:12:44.948616   13262 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	I0314 11:12:44.948622   13262 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0314 11:12:44.948674   13262 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-157000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-157000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0314 11:12:44.948733   13262 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0314 11:12:44.963025   13262 cni.go:84] Creating CNI manager for ""
	I0314 11:12:44.963039   13262 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0314 11:12:44.963046   13262 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0314 11:12:44.963054   13262 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-157000 NodeName:stopped-upgrade-157000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0314 11:12:44.963120   13262 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-157000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0314 11:12:44.963172   13262 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0314 11:12:44.966640   13262 binaries.go:44] Found k8s binaries, skipping transfer
	I0314 11:12:44.966668   13262 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0314 11:12:44.969899   13262 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0314 11:12:44.974698   13262 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0314 11:12:44.979552   13262 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0314 11:12:44.984970   13262 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0314 11:12:44.986180   13262 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 11:12:44.989620   13262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 11:12:45.051111   13262 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 11:12:45.062745   13262 certs.go:68] Setting up /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/stopped-upgrade-157000 for IP: 10.0.2.15
	I0314 11:12:45.062757   13262 certs.go:194] generating shared ca certs ...
	I0314 11:12:45.062766   13262 certs.go:226] acquiring lock for ca certs: {Name:mk6a5389e049f4ab73da9372eeaf63d358eca92f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 11:12:45.062927   13262 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18384-10823/.minikube/ca.key
	I0314 11:12:45.063190   13262 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18384-10823/.minikube/proxy-client-ca.key
	I0314 11:12:45.063198   13262 certs.go:256] generating profile certs ...
	I0314 11:12:45.063478   13262 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/stopped-upgrade-157000/client.key
	I0314 11:12:45.063520   13262 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/stopped-upgrade-157000/apiserver.key.e0ee09d6
	I0314 11:12:45.063534   13262 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/stopped-upgrade-157000/apiserver.crt.e0ee09d6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0314 11:12:45.204279   13262 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/stopped-upgrade-157000/apiserver.crt.e0ee09d6 ...
	I0314 11:12:45.204296   13262 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/stopped-upgrade-157000/apiserver.crt.e0ee09d6: {Name:mkf5b13511b68d86a378697f3d5619901b1032a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 11:12:45.204606   13262 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/stopped-upgrade-157000/apiserver.key.e0ee09d6 ...
	I0314 11:12:45.204611   13262 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/stopped-upgrade-157000/apiserver.key.e0ee09d6: {Name:mk1d1811403924069940736f68029fcffb7d246e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 11:12:45.204732   13262 certs.go:381] copying /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/stopped-upgrade-157000/apiserver.crt.e0ee09d6 -> /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/stopped-upgrade-157000/apiserver.crt
	I0314 11:12:45.204917   13262 certs.go:385] copying /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/stopped-upgrade-157000/apiserver.key.e0ee09d6 -> /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/stopped-upgrade-157000/apiserver.key
	I0314 11:12:45.205269   13262 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/stopped-upgrade-157000/proxy-client.key
	I0314 11:12:45.205467   13262 certs.go:484] found cert: /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/11238.pem (1338 bytes)
	W0314 11:12:45.205640   13262 certs.go:480] ignoring /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/11238_empty.pem, impossibly tiny 0 bytes
	I0314 11:12:45.205646   13262 certs.go:484] found cert: /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/ca-key.pem (1679 bytes)
	I0314 11:12:45.205674   13262 certs.go:484] found cert: /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/ca.pem (1082 bytes)
	I0314 11:12:45.205708   13262 certs.go:484] found cert: /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/cert.pem (1123 bytes)
	I0314 11:12:45.205733   13262 certs.go:484] found cert: /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/key.pem (1675 bytes)
	I0314 11:12:45.205788   13262 certs.go:484] found cert: /Users/jenkins/minikube-integration/18384-10823/.minikube/files/etc/ssl/certs/112382.pem (1708 bytes)
	I0314 11:12:45.206188   13262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18384-10823/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 11:12:45.213097   13262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18384-10823/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0314 11:12:45.219714   13262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18384-10823/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 11:12:45.226935   13262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18384-10823/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0314 11:12:45.234414   13262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/stopped-upgrade-157000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0314 11:12:45.240950   13262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/stopped-upgrade-157000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0314 11:12:45.247480   13262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/stopped-upgrade-157000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 11:12:45.254893   13262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/stopped-upgrade-157000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0314 11:12:45.261461   13262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18384-10823/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 11:12:45.267913   13262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/11238.pem --> /usr/share/ca-certificates/11238.pem (1338 bytes)
	I0314 11:12:45.274746   13262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18384-10823/.minikube/files/etc/ssl/certs/112382.pem --> /usr/share/ca-certificates/112382.pem (1708 bytes)
	I0314 11:12:45.281925   13262 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0314 11:12:45.287347   13262 ssh_runner.go:195] Run: openssl version
	I0314 11:12:45.289563   13262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 11:12:45.292382   13262 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 11:12:45.293826   13262 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 14 18:09 /usr/share/ca-certificates/minikubeCA.pem
	I0314 11:12:45.293843   13262 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 11:12:45.295800   13262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0314 11:12:45.299111   13262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11238.pem && ln -fs /usr/share/ca-certificates/11238.pem /etc/ssl/certs/11238.pem"
	I0314 11:12:45.302550   13262 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11238.pem
	I0314 11:12:45.303986   13262 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 14 17:57 /usr/share/ca-certificates/11238.pem
	I0314 11:12:45.304006   13262 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11238.pem
	I0314 11:12:45.305776   13262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11238.pem /etc/ssl/certs/51391683.0"
	I0314 11:12:45.308707   13262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112382.pem && ln -fs /usr/share/ca-certificates/112382.pem /etc/ssl/certs/112382.pem"
	I0314 11:12:45.311602   13262 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112382.pem
	I0314 11:12:45.313149   13262 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 14 17:57 /usr/share/ca-certificates/112382.pem
	I0314 11:12:45.313172   13262 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112382.pem
	I0314 11:12:45.314921   13262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112382.pem /etc/ssl/certs/3ec20f2e.0"
	I0314 11:12:45.319018   13262 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 11:12:45.320589   13262 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0314 11:12:45.323208   13262 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0314 11:12:45.325652   13262 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0314 11:12:45.327916   13262 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0314 11:12:45.329879   13262 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0314 11:12:45.331684   13262 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0314 11:12:45.333563   13262 kubeadm.go:391] StartCluster: {Name:stopped-upgrade-157000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52332 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-157000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0314 11:12:45.333630   13262 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0314 11:12:45.344260   13262 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0314 11:12:45.347428   13262 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0314 11:12:45.347434   13262 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0314 11:12:45.347437   13262 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0314 11:12:45.347466   13262 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0314 11:12:45.350273   13262 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0314 11:12:45.351021   13262 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-157000" does not appear in /Users/jenkins/minikube-integration/18384-10823/kubeconfig
	I0314 11:12:45.351131   13262 kubeconfig.go:62] /Users/jenkins/minikube-integration/18384-10823/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-157000" cluster setting kubeconfig missing "stopped-upgrade-157000" context setting]
	I0314 11:12:45.351331   13262 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18384-10823/kubeconfig: {Name:mk22117ed76e85ca64a0d4fa77d593f7fc7d1176 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 11:12:45.351758   13262 kapi.go:59] client config for stopped-upgrade-157000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/stopped-upgrade-157000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/stopped-upgrade-157000/client.key", CAFile:"/Users/jenkins/minikube-integration/18384-10823/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105dd8630), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0314 11:12:45.352374   13262 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0314 11:12:45.355123   13262 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-157000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0314 11:12:45.355130   13262 kubeadm.go:1153] stopping kube-system containers ...
	I0314 11:12:45.355166   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0314 11:12:45.366347   13262 docker.go:483] Stopping containers: [82be34d06648 8e89db56a692 727ab0ab8602 7a8b7168210f c2b3b8dcaef6 425c1f709af1 aaf4ccdffb9c d8f4cbb7cd6a]
	I0314 11:12:45.366421   13262 ssh_runner.go:195] Run: docker stop 82be34d06648 8e89db56a692 727ab0ab8602 7a8b7168210f c2b3b8dcaef6 425c1f709af1 aaf4ccdffb9c d8f4cbb7cd6a
	I0314 11:12:45.377564   13262 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0314 11:12:45.382920   13262 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 11:12:45.386261   13262 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 11:12:45.386266   13262 kubeadm.go:156] found existing configuration files:
	
	I0314 11:12:45.386289   13262 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52332 /etc/kubernetes/admin.conf
	I0314 11:12:45.389281   13262 kubeadm.go:162] "https://control-plane.minikube.internal:52332" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:52332 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 11:12:45.389308   13262 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 11:12:45.391741   13262 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52332 /etc/kubernetes/kubelet.conf
	I0314 11:12:45.394655   13262 kubeadm.go:162] "https://control-plane.minikube.internal:52332" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:52332 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 11:12:45.394678   13262 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 11:12:45.397767   13262 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52332 /etc/kubernetes/controller-manager.conf
	I0314 11:12:45.400205   13262 kubeadm.go:162] "https://control-plane.minikube.internal:52332" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:52332 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 11:12:45.400223   13262 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 11:12:45.402900   13262 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52332 /etc/kubernetes/scheduler.conf
	I0314 11:12:45.405899   13262 kubeadm.go:162] "https://control-plane.minikube.internal:52332" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:52332 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 11:12:45.405922   13262 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0314 11:12:45.408693   13262 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 11:12:45.994563   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:12:45.411381   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 11:12:45.443204   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 11:12:45.871194   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0314 11:12:45.990973   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 11:12:46.021664   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0314 11:12:46.049700   13262 api_server.go:52] waiting for apiserver process to appear ...
	I0314 11:12:46.049785   13262 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 11:12:46.551923   13262 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 11:12:47.050159   13262 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 11:12:47.054260   13262 api_server.go:72] duration metric: took 1.004579708s to wait for apiserver process to appear ...
	I0314 11:12:47.054270   13262 api_server.go:88] waiting for apiserver healthz status ...
	I0314 11:12:47.054278   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:12:50.996748   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:12:50.997132   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:12:51.026948   13130 logs.go:276] 2 containers: [bf97ed5e8eab ecd9237272db]
	I0314 11:12:51.027088   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:12:51.045263   13130 logs.go:276] 2 containers: [bf3b24ba54ce ffda7ab9d6f7]
	I0314 11:12:51.045356   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:12:51.059391   13130 logs.go:276] 1 containers: [34be9399e0e3]
	I0314 11:12:51.059479   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:12:51.071537   13130 logs.go:276] 2 containers: [47b861e30431 32036b13d627]
	I0314 11:12:51.071611   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:12:51.082245   13130 logs.go:276] 1 containers: [af01c9fe94bd]
	I0314 11:12:51.082317   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:12:51.095838   13130 logs.go:276] 2 containers: [b648808b453b 71a0c47a9be2]
	I0314 11:12:51.095919   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:12:51.107203   13130 logs.go:276] 0 containers: []
	W0314 11:12:51.107215   13130 logs.go:278] No container was found matching "kindnet"
	I0314 11:12:51.107277   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:12:51.118039   13130 logs.go:276] 2 containers: [819dad21cc40 5b39a3bc91a7]
	I0314 11:12:51.118060   13130 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:12:51.118066   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:12:51.152944   13130 logs.go:123] Gathering logs for kube-apiserver [bf97ed5e8eab] ...
	I0314 11:12:51.152955   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf97ed5e8eab"
	I0314 11:12:51.168595   13130 logs.go:123] Gathering logs for kube-scheduler [32036b13d627] ...
	I0314 11:12:51.168609   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32036b13d627"
	I0314 11:12:51.188443   13130 logs.go:123] Gathering logs for dmesg ...
	I0314 11:12:51.188453   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:12:51.192954   13130 logs.go:123] Gathering logs for kube-scheduler [47b861e30431] ...
	I0314 11:12:51.192960   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b861e30431"
	I0314 11:12:51.204562   13130 logs.go:123] Gathering logs for kube-proxy [af01c9fe94bd] ...
	I0314 11:12:51.204575   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af01c9fe94bd"
	I0314 11:12:51.216559   13130 logs.go:123] Gathering logs for storage-provisioner [5b39a3bc91a7] ...
	I0314 11:12:51.216570   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b39a3bc91a7"
	I0314 11:12:51.228257   13130 logs.go:123] Gathering logs for Docker ...
	I0314 11:12:51.228269   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:12:51.251437   13130 logs.go:123] Gathering logs for kube-apiserver [ecd9237272db] ...
	I0314 11:12:51.251445   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecd9237272db"
	I0314 11:12:51.267238   13130 logs.go:123] Gathering logs for coredns [34be9399e0e3] ...
	I0314 11:12:51.267252   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34be9399e0e3"
	I0314 11:12:51.281071   13130 logs.go:123] Gathering logs for kube-controller-manager [71a0c47a9be2] ...
	I0314 11:12:51.281083   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71a0c47a9be2"
	I0314 11:12:51.292144   13130 logs.go:123] Gathering logs for storage-provisioner [819dad21cc40] ...
	I0314 11:12:51.292154   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 819dad21cc40"
	I0314 11:12:51.310246   13130 logs.go:123] Gathering logs for container status ...
	I0314 11:12:51.310257   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:12:51.322105   13130 logs.go:123] Gathering logs for etcd [ffda7ab9d6f7] ...
	I0314 11:12:51.322116   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffda7ab9d6f7"
	I0314 11:12:51.336372   13130 logs.go:123] Gathering logs for etcd [bf3b24ba54ce] ...
	I0314 11:12:51.336385   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf3b24ba54ce"
	I0314 11:12:51.350040   13130 logs.go:123] Gathering logs for kube-controller-manager [b648808b453b] ...
	I0314 11:12:51.350051   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b648808b453b"
	I0314 11:12:51.369836   13130 logs.go:123] Gathering logs for kubelet ...
	I0314 11:12:51.369845   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:12:53.910470   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:12:52.056429   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:12:52.056531   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:12:58.913023   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:12:58.913387   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:12:58.949859   13130 logs.go:276] 2 containers: [bf97ed5e8eab ecd9237272db]
	I0314 11:12:58.949995   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:12:58.970453   13130 logs.go:276] 2 containers: [bf3b24ba54ce ffda7ab9d6f7]
	I0314 11:12:58.970545   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:12:58.984860   13130 logs.go:276] 1 containers: [34be9399e0e3]
	I0314 11:12:58.984924   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:12:58.997743   13130 logs.go:276] 2 containers: [47b861e30431 32036b13d627]
	I0314 11:12:58.997831   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:12:59.008388   13130 logs.go:276] 1 containers: [af01c9fe94bd]
	I0314 11:12:59.008457   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:12:57.057406   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:12:57.057491   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:12:59.018845   13130 logs.go:276] 2 containers: [b648808b453b 71a0c47a9be2]
	I0314 11:12:59.018901   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:12:59.028920   13130 logs.go:276] 0 containers: []
	W0314 11:12:59.028934   13130 logs.go:278] No container was found matching "kindnet"
	I0314 11:12:59.028994   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:12:59.039464   13130 logs.go:276] 2 containers: [819dad21cc40 5b39a3bc91a7]
	I0314 11:12:59.039484   13130 logs.go:123] Gathering logs for kubelet ...
	I0314 11:12:59.039489   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:12:59.076646   13130 logs.go:123] Gathering logs for kube-apiserver [ecd9237272db] ...
	I0314 11:12:59.076655   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecd9237272db"
	I0314 11:12:59.089299   13130 logs.go:123] Gathering logs for coredns [34be9399e0e3] ...
	I0314 11:12:59.089309   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34be9399e0e3"
	I0314 11:12:59.108031   13130 logs.go:123] Gathering logs for kube-scheduler [47b861e30431] ...
	I0314 11:12:59.108042   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b861e30431"
	I0314 11:12:59.123819   13130 logs.go:123] Gathering logs for container status ...
	I0314 11:12:59.123831   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:12:59.136014   13130 logs.go:123] Gathering logs for etcd [ffda7ab9d6f7] ...
	I0314 11:12:59.136025   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffda7ab9d6f7"
	I0314 11:12:59.150794   13130 logs.go:123] Gathering logs for kube-scheduler [32036b13d627] ...
	I0314 11:12:59.150807   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32036b13d627"
	I0314 11:12:59.169856   13130 logs.go:123] Gathering logs for kube-proxy [af01c9fe94bd] ...
	I0314 11:12:59.169869   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af01c9fe94bd"
	I0314 11:12:59.182154   13130 logs.go:123] Gathering logs for Docker ...
	I0314 11:12:59.182166   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:12:59.205786   13130 logs.go:123] Gathering logs for dmesg ...
	I0314 11:12:59.205793   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:12:59.210595   13130 logs.go:123] Gathering logs for etcd [bf3b24ba54ce] ...
	I0314 11:12:59.210602   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf3b24ba54ce"
	I0314 11:12:59.226624   13130 logs.go:123] Gathering logs for kube-controller-manager [b648808b453b] ...
	I0314 11:12:59.226634   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b648808b453b"
	I0314 11:12:59.243725   13130 logs.go:123] Gathering logs for kube-controller-manager [71a0c47a9be2] ...
	I0314 11:12:59.243736   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71a0c47a9be2"
	I0314 11:12:59.257961   13130 logs.go:123] Gathering logs for storage-provisioner [819dad21cc40] ...
	I0314 11:12:59.257974   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 819dad21cc40"
	I0314 11:12:59.269308   13130 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:12:59.269321   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:12:59.304148   13130 logs.go:123] Gathering logs for kube-apiserver [bf97ed5e8eab] ...
	I0314 11:12:59.304159   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf97ed5e8eab"
	I0314 11:12:59.317941   13130 logs.go:123] Gathering logs for storage-provisioner [5b39a3bc91a7] ...
	I0314 11:12:59.317951   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b39a3bc91a7"
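The "Gathering logs for ..." pairs then tail the last 400 lines from each source: docker logs for the enumerated containers, journalctl for the kubelet and Docker/cri-docker units, dmesg for the kernel, kubectl describe nodes for cluster state, and a crictl-with-docker-fallback for overall container status. A sketch of how those commands line up with the sources named above (assumed structure, not the actual logs.go code):

package main

import "fmt"

// gatherCommands maps each source named in the "Gathering logs for ..." lines
// to the shell command run over SSH to collect it. Container logs use
// "docker logs --tail 400"; systemd units use journalctl; container status
// prefers crictl and falls back to plain docker ps.
func gatherCommands(containers map[string][]string) map[string]string {
	cmds := map[string]string{
		"kubelet":          "sudo journalctl -u kubelet -n 400",
		"Docker":           "sudo journalctl -u docker -u cri-docker -n 400",
		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
		"describe nodes":   "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
	}
	for component, ids := range containers {
		for _, id := range ids {
			cmds[fmt.Sprintf("%s [%s]", component, id)] = "docker logs --tail 400 " + id
		}
	}
	return cmds
}

func main() {
	sample := map[string][]string{"etcd": {"bf3b24ba54ce", "ffda7ab9d6f7"}}
	for name, cmd := range gatherCommands(sample) {
		fmt.Printf("Gathering logs for %s ...\n  %s\n", name, cmd)
	}
}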
	I0314 11:13:01.835186   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:13:02.058300   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:13:02.058369   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:13:06.837680   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:13:06.837816   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:13:06.850414   13130 logs.go:276] 2 containers: [bf97ed5e8eab ecd9237272db]
	I0314 11:13:06.850482   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:13:06.864799   13130 logs.go:276] 2 containers: [bf3b24ba54ce ffda7ab9d6f7]
	I0314 11:13:06.864866   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:13:06.879700   13130 logs.go:276] 1 containers: [34be9399e0e3]
	I0314 11:13:06.879766   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:13:06.890382   13130 logs.go:276] 2 containers: [47b861e30431 32036b13d627]
	I0314 11:13:06.890457   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:13:06.900745   13130 logs.go:276] 1 containers: [af01c9fe94bd]
	I0314 11:13:06.900804   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:13:06.911300   13130 logs.go:276] 2 containers: [b648808b453b 71a0c47a9be2]
	I0314 11:13:06.911372   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:13:06.925671   13130 logs.go:276] 0 containers: []
	W0314 11:13:06.925683   13130 logs.go:278] No container was found matching "kindnet"
	I0314 11:13:06.925742   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:13:06.936398   13130 logs.go:276] 2 containers: [819dad21cc40 5b39a3bc91a7]
	I0314 11:13:06.936414   13130 logs.go:123] Gathering logs for dmesg ...
	I0314 11:13:06.936419   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:13:06.941314   13130 logs.go:123] Gathering logs for etcd [bf3b24ba54ce] ...
	I0314 11:13:06.941321   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf3b24ba54ce"
	I0314 11:13:06.955318   13130 logs.go:123] Gathering logs for kube-apiserver [bf97ed5e8eab] ...
	I0314 11:13:06.955327   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf97ed5e8eab"
	I0314 11:13:06.969696   13130 logs.go:123] Gathering logs for coredns [34be9399e0e3] ...
	I0314 11:13:06.969706   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34be9399e0e3"
	I0314 11:13:06.981053   13130 logs.go:123] Gathering logs for kube-scheduler [47b861e30431] ...
	I0314 11:13:06.981063   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b861e30431"
	I0314 11:13:06.993048   13130 logs.go:123] Gathering logs for kube-scheduler [32036b13d627] ...
	I0314 11:13:06.993061   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32036b13d627"
	I0314 11:13:07.012037   13130 logs.go:123] Gathering logs for kube-controller-manager [71a0c47a9be2] ...
	I0314 11:13:07.012049   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71a0c47a9be2"
	I0314 11:13:07.024105   13130 logs.go:123] Gathering logs for Docker ...
	I0314 11:13:07.024115   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:13:07.047983   13130 logs.go:123] Gathering logs for kubelet ...
	I0314 11:13:07.047993   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:13:07.085396   13130 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:13:07.085404   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:13:07.121340   13130 logs.go:123] Gathering logs for storage-provisioner [5b39a3bc91a7] ...
	I0314 11:13:07.121352   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b39a3bc91a7"
	I0314 11:13:07.133200   13130 logs.go:123] Gathering logs for container status ...
	I0314 11:13:07.133213   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:13:07.144465   13130 logs.go:123] Gathering logs for etcd [ffda7ab9d6f7] ...
	I0314 11:13:07.144479   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffda7ab9d6f7"
	I0314 11:13:07.159182   13130 logs.go:123] Gathering logs for kube-proxy [af01c9fe94bd] ...
	I0314 11:13:07.159196   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af01c9fe94bd"
	I0314 11:13:07.171183   13130 logs.go:123] Gathering logs for storage-provisioner [819dad21cc40] ...
	I0314 11:13:07.171193   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 819dad21cc40"
	I0314 11:13:07.183721   13130 logs.go:123] Gathering logs for kube-apiserver [ecd9237272db] ...
	I0314 11:13:07.183732   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecd9237272db"
	I0314 11:13:07.195896   13130 logs.go:123] Gathering logs for kube-controller-manager [b648808b453b] ...
	I0314 11:13:07.195910   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b648808b453b"
	I0314 11:13:07.059744   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:13:07.059760   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:13:09.715119   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:13:12.060851   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:13:12.060930   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:13:14.717615   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:13:14.718008   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:13:14.764132   13130 logs.go:276] 2 containers: [bf97ed5e8eab ecd9237272db]
	I0314 11:13:14.764263   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:13:14.783232   13130 logs.go:276] 2 containers: [bf3b24ba54ce ffda7ab9d6f7]
	I0314 11:13:14.783327   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:13:14.797280   13130 logs.go:276] 1 containers: [34be9399e0e3]
	I0314 11:13:14.797360   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:13:14.809214   13130 logs.go:276] 2 containers: [47b861e30431 32036b13d627]
	I0314 11:13:14.809292   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:13:14.819991   13130 logs.go:276] 1 containers: [af01c9fe94bd]
	I0314 11:13:14.820060   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:13:14.830259   13130 logs.go:276] 2 containers: [b648808b453b 71a0c47a9be2]
	I0314 11:13:14.830329   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:13:14.840323   13130 logs.go:276] 0 containers: []
	W0314 11:13:14.840336   13130 logs.go:278] No container was found matching "kindnet"
	I0314 11:13:14.840393   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:13:14.851158   13130 logs.go:276] 2 containers: [819dad21cc40 5b39a3bc91a7]
	I0314 11:13:14.851187   13130 logs.go:123] Gathering logs for etcd [bf3b24ba54ce] ...
	I0314 11:13:14.851195   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf3b24ba54ce"
	I0314 11:13:14.864936   13130 logs.go:123] Gathering logs for coredns [34be9399e0e3] ...
	I0314 11:13:14.864949   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34be9399e0e3"
	I0314 11:13:14.876507   13130 logs.go:123] Gathering logs for kube-scheduler [32036b13d627] ...
	I0314 11:13:14.876517   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32036b13d627"
	I0314 11:13:14.891372   13130 logs.go:123] Gathering logs for storage-provisioner [819dad21cc40] ...
	I0314 11:13:14.891385   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 819dad21cc40"
	I0314 11:13:14.907355   13130 logs.go:123] Gathering logs for Docker ...
	I0314 11:13:14.907364   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:13:14.930586   13130 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:13:14.930595   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:13:14.964250   13130 logs.go:123] Gathering logs for kube-apiserver [bf97ed5e8eab] ...
	I0314 11:13:14.964264   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf97ed5e8eab"
	I0314 11:13:14.978401   13130 logs.go:123] Gathering logs for etcd [ffda7ab9d6f7] ...
	I0314 11:13:14.978412   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffda7ab9d6f7"
	I0314 11:13:14.992792   13130 logs.go:123] Gathering logs for kube-proxy [af01c9fe94bd] ...
	I0314 11:13:14.992802   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af01c9fe94bd"
	I0314 11:13:15.004531   13130 logs.go:123] Gathering logs for container status ...
	I0314 11:13:15.004543   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:13:15.016847   13130 logs.go:123] Gathering logs for kubelet ...
	I0314 11:13:15.016857   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:13:15.053074   13130 logs.go:123] Gathering logs for dmesg ...
	I0314 11:13:15.053083   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:13:15.057580   13130 logs.go:123] Gathering logs for kube-controller-manager [b648808b453b] ...
	I0314 11:13:15.057585   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b648808b453b"
	I0314 11:13:15.074626   13130 logs.go:123] Gathering logs for kube-controller-manager [71a0c47a9be2] ...
	I0314 11:13:15.074635   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71a0c47a9be2"
	I0314 11:13:15.085957   13130 logs.go:123] Gathering logs for kube-apiserver [ecd9237272db] ...
	I0314 11:13:15.085968   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecd9237272db"
	I0314 11:13:15.099075   13130 logs.go:123] Gathering logs for kube-scheduler [47b861e30431] ...
	I0314 11:13:15.099086   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b861e30431"
	I0314 11:13:15.110708   13130 logs.go:123] Gathering logs for storage-provisioner [5b39a3bc91a7] ...
	I0314 11:13:15.110719   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b39a3bc91a7"
	I0314 11:13:17.624318   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:13:17.063088   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:13:17.063166   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:13:22.626735   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:13:22.627121   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:13:22.664185   13130 logs.go:276] 2 containers: [bf97ed5e8eab ecd9237272db]
	I0314 11:13:22.664319   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:13:22.687784   13130 logs.go:276] 2 containers: [bf3b24ba54ce ffda7ab9d6f7]
	I0314 11:13:22.687894   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:13:22.702943   13130 logs.go:276] 1 containers: [34be9399e0e3]
	I0314 11:13:22.703032   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:13:22.716264   13130 logs.go:276] 2 containers: [47b861e30431 32036b13d627]
	I0314 11:13:22.716344   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:13:22.731154   13130 logs.go:276] 1 containers: [af01c9fe94bd]
	I0314 11:13:22.731223   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:13:22.743662   13130 logs.go:276] 2 containers: [b648808b453b 71a0c47a9be2]
	I0314 11:13:22.743736   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:13:22.753969   13130 logs.go:276] 0 containers: []
	W0314 11:13:22.753983   13130 logs.go:278] No container was found matching "kindnet"
	I0314 11:13:22.754042   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:13:22.765173   13130 logs.go:276] 2 containers: [819dad21cc40 5b39a3bc91a7]
	I0314 11:13:22.765191   13130 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:13:22.765201   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:13:22.801680   13130 logs.go:123] Gathering logs for etcd [ffda7ab9d6f7] ...
	I0314 11:13:22.801691   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffda7ab9d6f7"
	I0314 11:13:22.824729   13130 logs.go:123] Gathering logs for kube-controller-manager [b648808b453b] ...
	I0314 11:13:22.824742   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b648808b453b"
	I0314 11:13:22.842655   13130 logs.go:123] Gathering logs for storage-provisioner [5b39a3bc91a7] ...
	I0314 11:13:22.842666   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b39a3bc91a7"
	I0314 11:13:22.854249   13130 logs.go:123] Gathering logs for Docker ...
	I0314 11:13:22.854260   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:13:22.877335   13130 logs.go:123] Gathering logs for container status ...
	I0314 11:13:22.877342   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:13:22.889605   13130 logs.go:123] Gathering logs for kube-apiserver [ecd9237272db] ...
	I0314 11:13:22.889618   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecd9237272db"
	I0314 11:13:22.904135   13130 logs.go:123] Gathering logs for coredns [34be9399e0e3] ...
	I0314 11:13:22.904149   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34be9399e0e3"
	I0314 11:13:22.915209   13130 logs.go:123] Gathering logs for kube-scheduler [47b861e30431] ...
	I0314 11:13:22.915220   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b861e30431"
	I0314 11:13:22.927010   13130 logs.go:123] Gathering logs for kube-scheduler [32036b13d627] ...
	I0314 11:13:22.927020   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32036b13d627"
	I0314 11:13:22.941600   13130 logs.go:123] Gathering logs for kube-controller-manager [71a0c47a9be2] ...
	I0314 11:13:22.941613   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71a0c47a9be2"
	I0314 11:13:22.953205   13130 logs.go:123] Gathering logs for storage-provisioner [819dad21cc40] ...
	I0314 11:13:22.953216   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 819dad21cc40"
	I0314 11:13:22.964790   13130 logs.go:123] Gathering logs for dmesg ...
	I0314 11:13:22.964803   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:13:22.969105   13130 logs.go:123] Gathering logs for kubelet ...
	I0314 11:13:22.969113   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:13:23.005361   13130 logs.go:123] Gathering logs for kube-apiserver [bf97ed5e8eab] ...
	I0314 11:13:23.005370   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf97ed5e8eab"
	I0314 11:13:23.022235   13130 logs.go:123] Gathering logs for etcd [bf3b24ba54ce] ...
	I0314 11:13:23.022245   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf3b24ba54ce"
	I0314 11:13:23.038360   13130 logs.go:123] Gathering logs for kube-proxy [af01c9fe94bd] ...
	I0314 11:13:23.038370   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af01c9fe94bd"
	I0314 11:13:22.064713   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:13:22.064779   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:13:25.551470   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:13:27.065663   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:13:27.065738   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:13:30.554009   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:13:30.554319   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:13:30.583040   13130 logs.go:276] 2 containers: [bf97ed5e8eab ecd9237272db]
	I0314 11:13:30.583181   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:13:30.600856   13130 logs.go:276] 2 containers: [bf3b24ba54ce ffda7ab9d6f7]
	I0314 11:13:30.600950   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:13:30.615662   13130 logs.go:276] 1 containers: [34be9399e0e3]
	I0314 11:13:30.615743   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:13:30.627104   13130 logs.go:276] 2 containers: [47b861e30431 32036b13d627]
	I0314 11:13:30.627180   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:13:30.638358   13130 logs.go:276] 1 containers: [af01c9fe94bd]
	I0314 11:13:30.638422   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:13:30.649839   13130 logs.go:276] 2 containers: [b648808b453b 71a0c47a9be2]
	I0314 11:13:30.649903   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:13:30.660202   13130 logs.go:276] 0 containers: []
	W0314 11:13:30.660213   13130 logs.go:278] No container was found matching "kindnet"
	I0314 11:13:30.660274   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:13:30.671357   13130 logs.go:276] 2 containers: [819dad21cc40 5b39a3bc91a7]
	I0314 11:13:30.671378   13130 logs.go:123] Gathering logs for kube-proxy [af01c9fe94bd] ...
	I0314 11:13:30.671383   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af01c9fe94bd"
	I0314 11:13:30.683153   13130 logs.go:123] Gathering logs for kube-scheduler [32036b13d627] ...
	I0314 11:13:30.683165   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32036b13d627"
	I0314 11:13:30.698533   13130 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:13:30.698543   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:13:30.733405   13130 logs.go:123] Gathering logs for storage-provisioner [819dad21cc40] ...
	I0314 11:13:30.733418   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 819dad21cc40"
	I0314 11:13:30.746924   13130 logs.go:123] Gathering logs for Docker ...
	I0314 11:13:30.746938   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:13:30.770739   13130 logs.go:123] Gathering logs for dmesg ...
	I0314 11:13:30.770755   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:13:30.775217   13130 logs.go:123] Gathering logs for kube-scheduler [47b861e30431] ...
	I0314 11:13:30.775223   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b861e30431"
	I0314 11:13:30.787876   13130 logs.go:123] Gathering logs for kube-apiserver [bf97ed5e8eab] ...
	I0314 11:13:30.787886   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf97ed5e8eab"
	I0314 11:13:30.801933   13130 logs.go:123] Gathering logs for kube-apiserver [ecd9237272db] ...
	I0314 11:13:30.801945   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecd9237272db"
	I0314 11:13:30.815425   13130 logs.go:123] Gathering logs for etcd [bf3b24ba54ce] ...
	I0314 11:13:30.815434   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf3b24ba54ce"
	I0314 11:13:30.829560   13130 logs.go:123] Gathering logs for etcd [ffda7ab9d6f7] ...
	I0314 11:13:30.829570   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffda7ab9d6f7"
	I0314 11:13:30.844297   13130 logs.go:123] Gathering logs for coredns [34be9399e0e3] ...
	I0314 11:13:30.844308   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34be9399e0e3"
	I0314 11:13:30.855910   13130 logs.go:123] Gathering logs for kube-controller-manager [b648808b453b] ...
	I0314 11:13:30.855922   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b648808b453b"
	I0314 11:13:30.874265   13130 logs.go:123] Gathering logs for kube-controller-manager [71a0c47a9be2] ...
	I0314 11:13:30.874275   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71a0c47a9be2"
	I0314 11:13:30.885672   13130 logs.go:123] Gathering logs for kubelet ...
	I0314 11:13:30.885684   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:13:30.925916   13130 logs.go:123] Gathering logs for container status ...
	I0314 11:13:30.925937   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:13:30.938337   13130 logs.go:123] Gathering logs for storage-provisioner [5b39a3bc91a7] ...
	I0314 11:13:30.938350   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b39a3bc91a7"
	I0314 11:13:33.452911   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:13:32.068142   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:13:32.068186   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:13:38.454767   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:13:38.454937   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:13:38.470463   13130 logs.go:276] 2 containers: [bf97ed5e8eab ecd9237272db]
	I0314 11:13:38.470556   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:13:38.485999   13130 logs.go:276] 2 containers: [bf3b24ba54ce ffda7ab9d6f7]
	I0314 11:13:38.486075   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:13:38.497246   13130 logs.go:276] 1 containers: [34be9399e0e3]
	I0314 11:13:38.497320   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:13:38.511965   13130 logs.go:276] 2 containers: [47b861e30431 32036b13d627]
	I0314 11:13:38.512047   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:13:38.522940   13130 logs.go:276] 1 containers: [af01c9fe94bd]
	I0314 11:13:38.523012   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:13:38.534265   13130 logs.go:276] 2 containers: [b648808b453b 71a0c47a9be2]
	I0314 11:13:38.534337   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:13:38.544546   13130 logs.go:276] 0 containers: []
	W0314 11:13:38.544557   13130 logs.go:278] No container was found matching "kindnet"
	I0314 11:13:38.544615   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:13:38.558095   13130 logs.go:276] 2 containers: [819dad21cc40 5b39a3bc91a7]
	I0314 11:13:38.558113   13130 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:13:38.558119   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:13:38.597987   13130 logs.go:123] Gathering logs for storage-provisioner [5b39a3bc91a7] ...
	I0314 11:13:38.598003   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5b39a3bc91a7"
	I0314 11:13:38.609584   13130 logs.go:123] Gathering logs for container status ...
	I0314 11:13:38.609595   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:13:38.622604   13130 logs.go:123] Gathering logs for kube-controller-manager [b648808b453b] ...
	I0314 11:13:38.622618   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b648808b453b"
	I0314 11:13:38.640140   13130 logs.go:123] Gathering logs for dmesg ...
	I0314 11:13:38.640151   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:13:38.645026   13130 logs.go:123] Gathering logs for kube-apiserver [bf97ed5e8eab] ...
	I0314 11:13:38.645038   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf97ed5e8eab"
	I0314 11:13:38.659102   13130 logs.go:123] Gathering logs for etcd [bf3b24ba54ce] ...
	I0314 11:13:38.659112   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bf3b24ba54ce"
	I0314 11:13:38.676570   13130 logs.go:123] Gathering logs for kube-scheduler [47b861e30431] ...
	I0314 11:13:38.676582   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47b861e30431"
	I0314 11:13:38.688473   13130 logs.go:123] Gathering logs for kube-scheduler [32036b13d627] ...
	I0314 11:13:38.688484   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32036b13d627"
	I0314 11:13:38.703278   13130 logs.go:123] Gathering logs for kube-controller-manager [71a0c47a9be2] ...
	I0314 11:13:38.703293   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 71a0c47a9be2"
	I0314 11:13:38.714918   13130 logs.go:123] Gathering logs for Docker ...
	I0314 11:13:38.714929   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:13:38.738054   13130 logs.go:123] Gathering logs for kubelet ...
	I0314 11:13:38.738062   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:13:38.778132   13130 logs.go:123] Gathering logs for kube-apiserver [ecd9237272db] ...
	I0314 11:13:38.778146   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ecd9237272db"
	I0314 11:13:38.791086   13130 logs.go:123] Gathering logs for etcd [ffda7ab9d6f7] ...
	I0314 11:13:38.791099   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffda7ab9d6f7"
	I0314 11:13:38.805044   13130 logs.go:123] Gathering logs for coredns [34be9399e0e3] ...
	I0314 11:13:38.805054   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34be9399e0e3"
	I0314 11:13:38.816910   13130 logs.go:123] Gathering logs for kube-proxy [af01c9fe94bd] ...
	I0314 11:13:38.816922   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af01c9fe94bd"
	I0314 11:13:38.828978   13130 logs.go:123] Gathering logs for storage-provisioner [819dad21cc40] ...
	I0314 11:13:38.828988   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 819dad21cc40"
	I0314 11:13:37.070406   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:13:37.070454   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:13:41.346740   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:13:42.072684   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:13:42.072727   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:13:46.349168   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:13:46.349282   13130 kubeadm.go:591] duration metric: took 4m4.1781365s to restartPrimaryControlPlane
	W0314 11:13:46.349358   13130 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0314 11:13:46.349384   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0314 11:13:47.403542   13130 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.054168375s)
	I0314 11:13:47.403595   13130 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 11:13:47.409101   13130 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 11:13:47.412093   13130 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 11:13:47.414820   13130 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 11:13:47.414827   13130 kubeadm.go:156] found existing configuration files:
	
	I0314 11:13:47.414850   13130 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52128 /etc/kubernetes/admin.conf
	I0314 11:13:47.417470   13130 kubeadm.go:162] "https://control-plane.minikube.internal:52128" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:52128 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 11:13:47.417498   13130 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 11:13:47.420617   13130 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52128 /etc/kubernetes/kubelet.conf
	I0314 11:13:47.423336   13130 kubeadm.go:162] "https://control-plane.minikube.internal:52128" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:52128 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 11:13:47.423360   13130 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 11:13:47.426185   13130 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52128 /etc/kubernetes/controller-manager.conf
	I0314 11:13:47.429581   13130 kubeadm.go:162] "https://control-plane.minikube.internal:52128" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:52128 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 11:13:47.429629   13130 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 11:13:47.433173   13130 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52128 /etc/kubernetes/scheduler.conf
	I0314 11:13:47.436546   13130 kubeadm.go:162] "https://control-plane.minikube.internal:52128" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:52128 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 11:13:47.436589   13130 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
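After the reset, kubeadm.go:162 greps each leftover kubeconfig for the expected control-plane endpoint (https://control-plane.minikube.internal:52128) and deletes any file where the grep fails, so that kubeadm init regenerates them; here every grep exits with status 2 because kubeadm reset already removed the files. A dry-run sketch of that cleanup (run stands in for minikube's SSH runner and is hypothetical):

package main

import "fmt"

// cleanStaleKubeconfigs reproduces the grep-then-rm pairs above: any
// /etc/kubernetes/*.conf that does not reference the expected endpoint
// is removed so "kubeadm init" can write a fresh one.
func cleanStaleKubeconfigs(run func(cmd string) error, endpoint string) {
	confs := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
	for _, c := range confs {
		path := "/etc/kubernetes/" + c
		if err := run(fmt.Sprintf("sudo grep %s %s", endpoint, path)); err != nil {
			// grep exits non-zero when the file is missing or the endpoint
			// is absent; either way the stale file is deleted.
			_ = run("sudo rm -f " + path)
		}
	}
}

func main() {
	// Dry run: print each command instead of executing it over SSH.
	cleanStaleKubeconfigs(func(cmd string) error {
		fmt.Println(cmd)
		return fmt.Errorf("simulated non-zero exit")
	}, "https://control-plane.minikube.internal:52128")
}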
	I0314 11:13:47.439733   13130 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0314 11:13:47.459736   13130 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0314 11:13:47.459776   13130 kubeadm.go:309] [preflight] Running pre-flight checks
	I0314 11:13:47.516239   13130 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0314 11:13:47.516297   13130 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0314 11:13:47.516343   13130 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0314 11:13:47.566438   13130 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0314 11:13:47.575208   13130 out.go:204]   - Generating certificates and keys ...
	I0314 11:13:47.575247   13130 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0314 11:13:47.575285   13130 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0314 11:13:47.575328   13130 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0314 11:13:47.575360   13130 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0314 11:13:47.575400   13130 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0314 11:13:47.575432   13130 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0314 11:13:47.575468   13130 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0314 11:13:47.575504   13130 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0314 11:13:47.575571   13130 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0314 11:13:47.575654   13130 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0314 11:13:47.575701   13130 kubeadm.go:309] [certs] Using the existing "sa" key
	I0314 11:13:47.575753   13130 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0314 11:13:47.601705   13130 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0314 11:13:47.716545   13130 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0314 11:13:47.778717   13130 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0314 11:13:47.866194   13130 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0314 11:13:47.900073   13130 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0314 11:13:47.900397   13130 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0314 11:13:47.900436   13130 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0314 11:13:48.001800   13130 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0314 11:13:48.005942   13130 out.go:204]   - Booting up control plane ...
	I0314 11:13:48.006023   13130 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0314 11:13:48.006062   13130 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0314 11:13:48.006093   13130 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0314 11:13:48.006127   13130 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0314 11:13:48.006199   13130 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0314 11:13:47.074845   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:13:47.074969   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:13:47.086647   13262 logs.go:276] 2 containers: [f9395aa9cac2 bc405a5fcc4c]
	I0314 11:13:47.086723   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:13:47.098698   13262 logs.go:276] 2 containers: [d36e3bec2911 8e89db56a692]
	I0314 11:13:47.098799   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:13:47.109192   13262 logs.go:276] 0 containers: []
	W0314 11:13:47.109206   13262 logs.go:278] No container was found matching "coredns"
	I0314 11:13:47.109274   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:13:47.120041   13262 logs.go:276] 2 containers: [d82fc4548c26 82be34d06648]
	I0314 11:13:47.120123   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:13:47.130670   13262 logs.go:276] 0 containers: []
	W0314 11:13:47.130683   13262 logs.go:278] No container was found matching "kube-proxy"
	I0314 11:13:47.130750   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:13:47.141624   13262 logs.go:276] 2 containers: [1f70f1399231 c1e3435f5898]
	I0314 11:13:47.141698   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:13:47.151896   13262 logs.go:276] 0 containers: []
	W0314 11:13:47.151907   13262 logs.go:278] No container was found matching "kindnet"
	I0314 11:13:47.151962   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:13:47.167342   13262 logs.go:276] 0 containers: []
	W0314 11:13:47.167355   13262 logs.go:278] No container was found matching "storage-provisioner"
	I0314 11:13:47.167360   13262 logs.go:123] Gathering logs for kubelet ...
	I0314 11:13:47.167366   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:13:47.197474   13262 logs.go:123] Gathering logs for dmesg ...
	I0314 11:13:47.197490   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:13:47.202295   13262 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:13:47.202308   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:13:47.330000   13262 logs.go:123] Gathering logs for kube-apiserver [f9395aa9cac2] ...
	I0314 11:13:47.330013   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9395aa9cac2"
	I0314 11:13:47.345706   13262 logs.go:123] Gathering logs for kube-apiserver [bc405a5fcc4c] ...
	I0314 11:13:47.345723   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc405a5fcc4c"
	I0314 11:13:47.360383   13262 logs.go:123] Gathering logs for kube-controller-manager [1f70f1399231] ...
	I0314 11:13:47.360396   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f70f1399231"
	I0314 11:13:47.380541   13262 logs.go:123] Gathering logs for kube-controller-manager [c1e3435f5898] ...
	I0314 11:13:47.380555   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1e3435f5898"
	I0314 11:13:47.398987   13262 logs.go:123] Gathering logs for Docker ...
	I0314 11:13:47.398999   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:13:47.424376   13262 logs.go:123] Gathering logs for container status ...
	I0314 11:13:47.424385   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:13:47.440412   13262 logs.go:123] Gathering logs for etcd [d36e3bec2911] ...
	I0314 11:13:47.440430   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36e3bec2911"
	I0314 11:13:47.455182   13262 logs.go:123] Gathering logs for etcd [8e89db56a692] ...
	I0314 11:13:47.455200   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e89db56a692"
	I0314 11:13:47.477334   13262 logs.go:123] Gathering logs for kube-scheduler [d82fc4548c26] ...
	I0314 11:13:47.477352   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d82fc4548c26"
	I0314 11:13:47.502963   13262 logs.go:123] Gathering logs for kube-scheduler [82be34d06648] ...
	I0314 11:13:47.502978   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82be34d06648"
	I0314 11:13:50.028068   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:13:52.006216   13130 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.001610 seconds
	I0314 11:13:52.006277   13130 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0314 11:13:52.009914   13130 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0314 11:13:52.520530   13130 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0314 11:13:52.520800   13130 kubeadm.go:309] [mark-control-plane] Marking the node running-upgrade-636000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0314 11:13:53.024424   13130 kubeadm.go:309] [bootstrap-token] Using token: djvjdd.cohpa5f8p95pbnzu
	I0314 11:13:53.030964   13130 out.go:204]   - Configuring RBAC rules ...
	I0314 11:13:53.031035   13130 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0314 11:13:53.031084   13130 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0314 11:13:53.033325   13130 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0314 11:13:53.035717   13130 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0314 11:13:53.036517   13130 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0314 11:13:53.037377   13130 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0314 11:13:53.041551   13130 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0314 11:13:53.225106   13130 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0314 11:13:53.428396   13130 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0314 11:13:53.428844   13130 kubeadm.go:309] 
	I0314 11:13:53.428877   13130 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0314 11:13:53.428881   13130 kubeadm.go:309] 
	I0314 11:13:53.428919   13130 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0314 11:13:53.428924   13130 kubeadm.go:309] 
	I0314 11:13:53.428939   13130 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0314 11:13:53.428971   13130 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0314 11:13:53.428995   13130 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0314 11:13:53.428997   13130 kubeadm.go:309] 
	I0314 11:13:53.429023   13130 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0314 11:13:53.429027   13130 kubeadm.go:309] 
	I0314 11:13:53.429048   13130 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0314 11:13:53.429051   13130 kubeadm.go:309] 
	I0314 11:13:53.429079   13130 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0314 11:13:53.429120   13130 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0314 11:13:53.429163   13130 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0314 11:13:53.429168   13130 kubeadm.go:309] 
	I0314 11:13:53.429209   13130 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0314 11:13:53.429254   13130 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0314 11:13:53.429260   13130 kubeadm.go:309] 
	I0314 11:13:53.429302   13130 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token djvjdd.cohpa5f8p95pbnzu \
	I0314 11:13:53.429357   13130 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8e5a4174d82744a5f88c6921b8e1e2cb9a0b16334ed79a2160efb286b25bc185 \
	I0314 11:13:53.429370   13130 kubeadm.go:309] 	--control-plane 
	I0314 11:13:53.429375   13130 kubeadm.go:309] 
	I0314 11:13:53.429415   13130 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0314 11:13:53.429419   13130 kubeadm.go:309] 
	I0314 11:13:53.429460   13130 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token djvjdd.cohpa5f8p95pbnzu \
	I0314 11:13:53.429524   13130 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8e5a4174d82744a5f88c6921b8e1e2cb9a0b16334ed79a2160efb286b25bc185 
	I0314 11:13:53.429581   13130 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
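The --discovery-token-ca-cert-hash printed in both join commands is the SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info, which joining nodes use to pin the CA. A self-contained way to recompute it (this is kubeadm's standard scheme; /etc/kubernetes/pki/ca.crt is kubeadm's default location, whereas this cluster keeps its certs under /var/lib/minikube/certs per the [certs] lines above):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

// Recompute kubeadm's discovery-token-ca-cert-hash: sha256 over the
// DER-encoded SubjectPublicKeyInfo of the cluster CA certificate.
func main() {
	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	// For this cluster the output would be the sha256:8e5a41... value above.
	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
}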
	I0314 11:13:53.429599   13130 cni.go:84] Creating CNI manager for ""
	I0314 11:13:53.429609   13130 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0314 11:13:53.433947   13130 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0314 11:13:53.442947   13130 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0314 11:13:53.445799   13130 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
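"scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)" writes minikube's bridge CNI config to the node directly from memory. The log does not show the payload; the sketch below writes a representative bridge-plus-portmap conflist of the shape the bridge CNI plugin expects (field values are illustrative, not the actual 457-byte file minikube ships):

package main

import "os"

// bridgeConflist is an illustrative stand-in for the config behind the
// "Configuring bridge CNI" step: a bridge plugin with host-local IPAM,
// plus portmap for hostPort support.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	// Needs root on the node; minikube performs the equivalent write over SSH.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		panic(err)
	}
}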
	I0314 11:13:53.451244   13130 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0314 11:13:53.451309   13130 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 11:13:53.451567   13130 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-636000 minikube.k8s.io/updated_at=2024_03_14T11_13_53_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7 minikube.k8s.io/name=running-upgrade-636000 minikube.k8s.io/primary=true
	I0314 11:13:53.500327   13130 ops.go:34] apiserver oom_adj: -16
	I0314 11:13:53.500341   13130 kubeadm.go:1106] duration metric: took 49.081958ms to wait for elevateKubeSystemPrivileges
	W0314 11:13:53.500679   13130 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0314 11:13:53.500684   13130 kubeadm.go:393] duration metric: took 4m11.343798375s to StartCluster
	I0314 11:13:53.500699   13130 settings.go:142] acquiring lock: {Name:mk5ca7daa9f67a4c042500e8aa0b177318634dbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 11:13:53.500851   13130 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18384-10823/kubeconfig
	I0314 11:13:53.501408   13130 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18384-10823/kubeconfig: {Name:mk22117ed76e85ca64a0d4fa77d593f7fc7d1176 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 11:13:53.501726   13130 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0314 11:13:53.505916   13130 out.go:177] * Verifying Kubernetes components...
	I0314 11:13:53.501836   13130 config.go:182] Loaded profile config "running-upgrade-636000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0314 11:13:53.501914   13130 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0314 11:13:53.513937   13130 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-636000"
	I0314 11:13:53.513950   13130 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-636000"
	W0314 11:13:53.513953   13130 addons.go:243] addon storage-provisioner should already be in state true
	I0314 11:13:53.513967   13130 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 11:13:53.513972   13130 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-636000"
	I0314 11:13:53.513976   13130 host.go:66] Checking if "running-upgrade-636000" exists ...
	I0314 11:13:53.514131   13130 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-636000"
	I0314 11:13:53.517860   13130 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 11:13:53.524022   13130 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 11:13:53.524031   13130 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0314 11:13:53.524041   13130 sshutil.go:53] new ssh client: &{IP:localhost Port:52096 SSHKeyPath:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/running-upgrade-636000/id_rsa Username:docker}
	I0314 11:13:53.525298   13130 kapi.go:59] client config for running-upgrade-636000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/running-upgrade-636000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/running-upgrade-636000/client.key", CAFile:"/Users/jenkins/minikube-integration/18384-10823/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1045a4630), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
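The kapi.go:59 line above dumps the client-go rest.Config minikube builds to reach the restarted apiserver: host https://10.0.2.15:8443 with the profile's client cert/key and the minikube CA. An equivalent client built by hand (paths copied from that config; assumes a client-go version compatible with a v1.24 cluster):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Same endpoint and TLS material as the logged rest.Config.
	cfg := &rest.Config{
		Host: "https://10.0.2.15:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/running-upgrade-636000/client.crt",
			KeyFile:  "/Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/running-upgrade-636000/client.key",
			CAFile:   "/Users/jenkins/minikube-integration/18384-10823/.minikube/ca.crt",
		},
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("nodes:", len(nodes.Items))
}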
	I0314 11:13:53.525528   13130 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-636000"
	W0314 11:13:53.525533   13130 addons.go:243] addon default-storageclass should already be in state true
	I0314 11:13:53.525544   13130 host.go:66] Checking if "running-upgrade-636000" exists ...
	I0314 11:13:53.526300   13130 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0314 11:13:53.526305   13130 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0314 11:13:53.526309   13130 sshutil.go:53] new ssh client: &{IP:localhost Port:52096 SSHKeyPath:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/running-upgrade-636000/id_rsa Username:docker}
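Both manifests are staged the same way: "scp memory -->" means the YAML is streamed from minikube's in-memory assets straight into the guest over the SSH endpoint on the sshutil lines (localhost:52096, user docker, the profile's id_rsa). A rough stand-alone equivalent using a plain ssh client rather than minikube's internal runner; the manifest contents here are a placeholder, the real file is 2676 bytes per the log:

    package main

    import (
    	"bytes"
    	"log"
    	"os/exec"
    )

    func main() {
    	// Placeholder for the embedded asset minikube streams.
    	manifest := []byte("# storage-provisioner.yaml contents\n")
    	cmd := exec.Command("ssh", "-p", "52096",
    		"-i", "/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/running-upgrade-636000/id_rsa",
    		"docker@localhost",
    		"sudo tee /etc/kubernetes/addons/storage-provisioner.yaml >/dev/null")
    	cmd.Stdin = bytes.NewReader(manifest)
    	if out, err := cmd.CombinedOutput(); err != nil {
    		log.Fatalf("staging manifest failed: %v: %s", err, out)
    	}
    }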
	I0314 11:13:53.609244   13130 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 11:13:53.615242   13130 api_server.go:52] waiting for apiserver process to appear ...
	I0314 11:13:53.615297   13130 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 11:13:53.620137   13130 api_server.go:72] duration metric: took 118.398459ms to wait for apiserver process to appear ...
	I0314 11:13:53.620151   13130 api_server.go:88] waiting for apiserver healthz status ...
	I0314 11:13:53.620158   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:13:53.661049   13130 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0314 11:13:53.661664   13130 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
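With the files staged, each addon is applied inside the guest with the bundled kubectl against the guest-local kubeconfig, as the two Run lines above record. The same pair of invocations from Go, reusing the exact paths in the log (sudo accepts the leading KUBECONFIG=... environment assignment):

    package main

    import (
    	"log"
    	"os/exec"
    )

    func main() {
    	for _, manifest := range []string{
    		"/etc/kubernetes/addons/storageclass.yaml",
    		"/etc/kubernetes/addons/storage-provisioner.yaml",
    	} {
    		out, err := exec.Command("sudo",
    			"KUBECONFIG=/var/lib/minikube/kubeconfig",
    			"/var/lib/minikube/binaries/v1.24.1/kubectl",
    			"apply", "-f", manifest).CombinedOutput()
    		if err != nil {
    			log.Printf("apply %s: %v: %s", manifest, err, out)
    		}
    	}
    }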
	I0314 11:13:55.030192   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
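From this point the transcript is dominated by one pattern, run independently by both test processes: probe https://10.0.2.15:8443/healthz, record "stopped: ... context deadline exceeded" when the request times out, and retry a few seconds later. A self-contained poller in the same spirit; the 5s per-request timeout and 2s retry interval are assumptions read off the log's timestamps, and InsecureSkipVerify stands in for the client-certificate setup shown in the kapi.go dump above:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func waitForHealthz(url string, deadline time.Duration) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second, // assumed; matches the gaps between probes
    		Transport: &http.Transport{
    			// Stand-in for the cert/key/CA config the real client uses.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	stop := time.Now().Add(deadline)
    	for time.Now().Before(stop) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
    				return nil // apiserver is healthy
    			}
    		}
    		time.Sleep(2 * time.Second)
    	}
    	return fmt.Errorf("apiserver at %s never reported healthy", url)
    }

    func main() {
    	if err := waitForHealthz("https://10.0.2.15:8443/healthz", 4*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }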
	I0314 11:13:55.030324   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:13:55.041021   13262 logs.go:276] 2 containers: [f9395aa9cac2 bc405a5fcc4c]
	I0314 11:13:55.041098   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:13:55.052034   13262 logs.go:276] 2 containers: [d36e3bec2911 8e89db56a692]
	I0314 11:13:55.052103   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:13:55.062082   13262 logs.go:276] 1 containers: [0301160bab63]
	I0314 11:13:55.062157   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:13:55.072820   13262 logs.go:276] 2 containers: [d82fc4548c26 82be34d06648]
	I0314 11:13:55.072904   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:13:55.083146   13262 logs.go:276] 1 containers: [25bb30ac6daf]
	I0314 11:13:55.083216   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:13:55.093493   13262 logs.go:276] 2 containers: [1f70f1399231 c1e3435f5898]
	I0314 11:13:55.093566   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:13:55.103968   13262 logs.go:276] 0 containers: []
	W0314 11:13:55.103978   13262 logs.go:278] No container was found matching "kindnet"
	I0314 11:13:55.104035   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:13:55.114148   13262 logs.go:276] 0 containers: []
	W0314 11:13:55.114159   13262 logs.go:278] No container was found matching "storage-provisioner"
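Each failed probe triggers a container inventory: one docker ps -a --filter=name=k8s_<component> --format={{.ID}} per control-plane component (the "1 containers" phrasing in the results suggests a fixed format string that does not pluralize). The two warnings are unsurprising: no kindnet runs in this cluster, and the just-applied storage-provisioner had not yet produced a container. A reproduction of the inventory step, assuming a reachable docker daemon on the guest:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerIDs mirrors one inventory line:
    //   docker ps -a --filter=name=k8s_<component> --format={{.ID}}
    func containerIDs(component string) ([]string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component,
    		"--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
    		"kube-scheduler", "kube-proxy", "kube-controller-manager",
    		"kindnet", "storage-provisioner"} {
    		ids, err := containerIDs(c)
    		fmt.Printf("%d containers: %v (err=%v)\n", len(ids), ids, err)
    	}
    }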
	I0314 11:13:55.114166   13262 logs.go:123] Gathering logs for kube-controller-manager [1f70f1399231] ...
	I0314 11:13:55.114171   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f70f1399231"
	I0314 11:13:55.132830   13262 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:13:55.132841   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:13:55.170242   13262 logs.go:123] Gathering logs for kube-apiserver [bc405a5fcc4c] ...
	I0314 11:13:55.170254   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc405a5fcc4c"
	I0314 11:13:55.182967   13262 logs.go:123] Gathering logs for kube-scheduler [d82fc4548c26] ...
	I0314 11:13:55.182981   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d82fc4548c26"
	I0314 11:13:55.211099   13262 logs.go:123] Gathering logs for kube-proxy [25bb30ac6daf] ...
	I0314 11:13:55.211110   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25bb30ac6daf"
	I0314 11:13:55.227132   13262 logs.go:123] Gathering logs for etcd [8e89db56a692] ...
	I0314 11:13:55.227149   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e89db56a692"
	I0314 11:13:55.242803   13262 logs.go:123] Gathering logs for kube-scheduler [82be34d06648] ...
	I0314 11:13:55.242819   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82be34d06648"
	I0314 11:13:55.258476   13262 logs.go:123] Gathering logs for container status ...
	I0314 11:13:55.258495   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:13:55.274025   13262 logs.go:123] Gathering logs for Docker ...
	I0314 11:13:55.274037   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:13:55.299897   13262 logs.go:123] Gathering logs for kubelet ...
	I0314 11:13:55.299912   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:13:55.328613   13262 logs.go:123] Gathering logs for kube-apiserver [f9395aa9cac2] ...
	I0314 11:13:55.328634   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9395aa9cac2"
	I0314 11:13:55.346316   13262 logs.go:123] Gathering logs for coredns [0301160bab63] ...
	I0314 11:13:55.346331   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0301160bab63"
	I0314 11:13:55.358340   13262 logs.go:123] Gathering logs for kube-controller-manager [c1e3435f5898] ...
	I0314 11:13:55.358352   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1e3435f5898"
	I0314 11:13:55.375283   13262 logs.go:123] Gathering logs for dmesg ...
	I0314 11:13:55.375294   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:13:55.379345   13262 logs.go:123] Gathering logs for etcd [d36e3bec2911] ...
	I0314 11:13:55.379352   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36e3bec2911"
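The inventory is followed by a log sweep over every source the tool knows: docker logs --tail 400 for each container found, journalctl for the docker/cri-docker and kubelet units, a severity-filtered dmesg, kubectl describe nodes through the guest kubeconfig, and a container-status listing that prefers crictl but falls back to docker ps -a (the backtick substitution in the "container status" line above). A condensed sketch of that sweep, reusing the shell strings verbatim from the log; the container IDs are two of those inventoried above:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func gather(name, script string) {
    	// The log runs each of these through /bin/bash -c on the guest.
    	out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
    	fmt.Printf("=== %s (err=%v) ===\n%s", name, err, out)
    }

    func main() {
    	for _, id := range []string{"f9395aa9cac2", "d36e3bec2911"} {
    		gather("container "+id, "docker logs --tail 400 "+id)
    	}
    	gather("Docker", "sudo journalctl -u docker -u cri-docker -n 400")
    	gather("kubelet", "sudo journalctl -u kubelet -n 400")
    	gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
    	gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
    }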
	I0314 11:13:58.622198   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:13:58.622232   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:13:57.895094   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
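Timestamps here are not monotonic (11:13:58 just above precedes 11:13:57) because two processes, 13130 and 13262, interleave their output in this transcript; the PID is the third whitespace-separated field of every line. A small demultiplexer for reading one stream at a time:

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	// Reads an interleaved transcript on stdin and prefixes each line
    	// with its PID, e.g.: go run demux.go < transcript.log | grep '^13262'
    	sc := bufio.NewScanner(os.Stdin)
    	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // some log lines are very long
    	for sc.Scan() {
    		f := strings.Fields(sc.Text())
    		if len(f) >= 3 {
    			fmt.Printf("%s\t%s\n", f[2], sc.Text())
    		}
    	}
    }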
	I0314 11:14:03.622683   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:14:03.622716   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:14:02.897429   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:14:02.897662   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:14:02.920166   13262 logs.go:276] 2 containers: [f9395aa9cac2 bc405a5fcc4c]
	I0314 11:14:02.920267   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:14:02.935915   13262 logs.go:276] 2 containers: [d36e3bec2911 8e89db56a692]
	I0314 11:14:02.935993   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:14:02.946897   13262 logs.go:276] 1 containers: [0301160bab63]
	I0314 11:14:02.946962   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:14:02.957767   13262 logs.go:276] 2 containers: [d82fc4548c26 82be34d06648]
	I0314 11:14:02.957849   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:14:02.968093   13262 logs.go:276] 1 containers: [25bb30ac6daf]
	I0314 11:14:02.968156   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:14:02.978429   13262 logs.go:276] 2 containers: [1f70f1399231 c1e3435f5898]
	I0314 11:14:02.978503   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:14:02.988316   13262 logs.go:276] 0 containers: []
	W0314 11:14:02.988328   13262 logs.go:278] No container was found matching "kindnet"
	I0314 11:14:02.988385   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:14:02.998227   13262 logs.go:276] 0 containers: []
	W0314 11:14:02.998241   13262 logs.go:278] No container was found matching "storage-provisioner"
	I0314 11:14:02.998249   13262 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:14:02.998255   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:14:03.033530   13262 logs.go:123] Gathering logs for kube-apiserver [bc405a5fcc4c] ...
	I0314 11:14:03.033541   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc405a5fcc4c"
	I0314 11:14:03.046459   13262 logs.go:123] Gathering logs for Docker ...
	I0314 11:14:03.046471   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:14:03.072408   13262 logs.go:123] Gathering logs for kube-scheduler [d82fc4548c26] ...
	I0314 11:14:03.072417   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d82fc4548c26"
	I0314 11:14:03.095517   13262 logs.go:123] Gathering logs for kube-proxy [25bb30ac6daf] ...
	I0314 11:14:03.095531   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25bb30ac6daf"
	I0314 11:14:03.107478   13262 logs.go:123] Gathering logs for container status ...
	I0314 11:14:03.107491   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:14:03.119505   13262 logs.go:123] Gathering logs for kube-controller-manager [1f70f1399231] ...
	I0314 11:14:03.119518   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f70f1399231"
	I0314 11:14:03.137617   13262 logs.go:123] Gathering logs for kubelet ...
	I0314 11:14:03.137631   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:14:03.166179   13262 logs.go:123] Gathering logs for dmesg ...
	I0314 11:14:03.166189   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:14:03.170551   13262 logs.go:123] Gathering logs for kube-apiserver [f9395aa9cac2] ...
	I0314 11:14:03.170558   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9395aa9cac2"
	I0314 11:14:03.184381   13262 logs.go:123] Gathering logs for etcd [d36e3bec2911] ...
	I0314 11:14:03.184392   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36e3bec2911"
	I0314 11:14:03.201980   13262 logs.go:123] Gathering logs for etcd [8e89db56a692] ...
	I0314 11:14:03.201991   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e89db56a692"
	I0314 11:14:03.216028   13262 logs.go:123] Gathering logs for coredns [0301160bab63] ...
	I0314 11:14:03.216043   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0301160bab63"
	I0314 11:14:03.229205   13262 logs.go:123] Gathering logs for kube-scheduler [82be34d06648] ...
	I0314 11:14:03.229216   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82be34d06648"
	I0314 11:14:03.244278   13262 logs.go:123] Gathering logs for kube-controller-manager [c1e3435f5898] ...
	I0314 11:14:03.244289   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1e3435f5898"
	I0314 11:14:08.623311   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:14:08.623332   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:14:05.764004   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:14:13.623807   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:14:13.623866   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:14:10.766309   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:14:10.766497   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:14:10.783095   13262 logs.go:276] 2 containers: [f9395aa9cac2 bc405a5fcc4c]
	I0314 11:14:10.783166   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:14:10.796582   13262 logs.go:276] 2 containers: [d36e3bec2911 8e89db56a692]
	I0314 11:14:10.796669   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:14:10.807197   13262 logs.go:276] 1 containers: [0301160bab63]
	I0314 11:14:10.807270   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:14:10.818048   13262 logs.go:276] 2 containers: [d82fc4548c26 82be34d06648]
	I0314 11:14:10.818126   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:14:10.828543   13262 logs.go:276] 1 containers: [25bb30ac6daf]
	I0314 11:14:10.828611   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:14:10.839096   13262 logs.go:276] 2 containers: [1f70f1399231 c1e3435f5898]
	I0314 11:14:10.839168   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:14:10.849344   13262 logs.go:276] 0 containers: []
	W0314 11:14:10.849354   13262 logs.go:278] No container was found matching "kindnet"
	I0314 11:14:10.849408   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:14:10.859621   13262 logs.go:276] 0 containers: []
	W0314 11:14:10.859632   13262 logs.go:278] No container was found matching "storage-provisioner"
	I0314 11:14:10.859643   13262 logs.go:123] Gathering logs for kube-scheduler [82be34d06648] ...
	I0314 11:14:10.859651   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82be34d06648"
	I0314 11:14:10.874523   13262 logs.go:123] Gathering logs for kube-controller-manager [1f70f1399231] ...
	I0314 11:14:10.874534   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f70f1399231"
	I0314 11:14:10.895051   13262 logs.go:123] Gathering logs for kube-controller-manager [c1e3435f5898] ...
	I0314 11:14:10.895062   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1e3435f5898"
	I0314 11:14:10.912960   13262 logs.go:123] Gathering logs for Docker ...
	I0314 11:14:10.912976   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:14:10.939058   13262 logs.go:123] Gathering logs for dmesg ...
	I0314 11:14:10.939083   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:14:10.942927   13262 logs.go:123] Gathering logs for etcd [d36e3bec2911] ...
	I0314 11:14:10.942933   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36e3bec2911"
	I0314 11:14:10.957043   13262 logs.go:123] Gathering logs for coredns [0301160bab63] ...
	I0314 11:14:10.957054   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0301160bab63"
	I0314 11:14:10.968993   13262 logs.go:123] Gathering logs for kube-scheduler [d82fc4548c26] ...
	I0314 11:14:10.969005   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d82fc4548c26"
	I0314 11:14:10.995748   13262 logs.go:123] Gathering logs for kube-apiserver [f9395aa9cac2] ...
	I0314 11:14:10.995759   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9395aa9cac2"
	I0314 11:14:11.009732   13262 logs.go:123] Gathering logs for container status ...
	I0314 11:14:11.009743   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:14:11.021702   13262 logs.go:123] Gathering logs for kubelet ...
	I0314 11:14:11.021714   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:14:11.051446   13262 logs.go:123] Gathering logs for etcd [8e89db56a692] ...
	I0314 11:14:11.051456   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e89db56a692"
	I0314 11:14:11.066140   13262 logs.go:123] Gathering logs for kube-proxy [25bb30ac6daf] ...
	I0314 11:14:11.066153   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25bb30ac6daf"
	I0314 11:14:11.077874   13262 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:14:11.077885   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:14:11.115610   13262 logs.go:123] Gathering logs for kube-apiserver [bc405a5fcc4c] ...
	I0314 11:14:11.115624   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc405a5fcc4c"
	I0314 11:14:13.630454   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:14:18.624683   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:14:18.624754   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:14:18.632685   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:14:18.633025   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:14:18.666079   13262 logs.go:276] 2 containers: [f9395aa9cac2 bc405a5fcc4c]
	I0314 11:14:18.666221   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:14:18.691685   13262 logs.go:276] 2 containers: [d36e3bec2911 8e89db56a692]
	I0314 11:14:18.691783   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:14:18.705045   13262 logs.go:276] 1 containers: [0301160bab63]
	I0314 11:14:18.705122   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:14:18.717008   13262 logs.go:276] 2 containers: [d82fc4548c26 82be34d06648]
	I0314 11:14:18.717084   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:14:18.728329   13262 logs.go:276] 1 containers: [25bb30ac6daf]
	I0314 11:14:18.728401   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:14:18.739442   13262 logs.go:276] 2 containers: [1f70f1399231 c1e3435f5898]
	I0314 11:14:18.739516   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:14:18.749539   13262 logs.go:276] 0 containers: []
	W0314 11:14:18.749551   13262 logs.go:278] No container was found matching "kindnet"
	I0314 11:14:18.749610   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:14:18.759769   13262 logs.go:276] 0 containers: []
	W0314 11:14:18.759781   13262 logs.go:278] No container was found matching "storage-provisioner"
	I0314 11:14:18.759791   13262 logs.go:123] Gathering logs for dmesg ...
	I0314 11:14:18.759797   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:14:18.764318   13262 logs.go:123] Gathering logs for etcd [8e89db56a692] ...
	I0314 11:14:18.764324   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e89db56a692"
	I0314 11:14:18.778369   13262 logs.go:123] Gathering logs for kube-proxy [25bb30ac6daf] ...
	I0314 11:14:18.778380   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25bb30ac6daf"
	I0314 11:14:18.789995   13262 logs.go:123] Gathering logs for kube-controller-manager [1f70f1399231] ...
	I0314 11:14:18.790009   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f70f1399231"
	I0314 11:14:18.807363   13262 logs.go:123] Gathering logs for kubelet ...
	I0314 11:14:18.807374   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:14:18.835706   13262 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:14:18.835713   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:14:18.871754   13262 logs.go:123] Gathering logs for kube-apiserver [f9395aa9cac2] ...
	I0314 11:14:18.871765   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9395aa9cac2"
	I0314 11:14:18.885377   13262 logs.go:123] Gathering logs for etcd [d36e3bec2911] ...
	I0314 11:14:18.885390   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36e3bec2911"
	I0314 11:14:18.899202   13262 logs.go:123] Gathering logs for kube-scheduler [d82fc4548c26] ...
	I0314 11:14:18.899214   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d82fc4548c26"
	I0314 11:14:18.922275   13262 logs.go:123] Gathering logs for container status ...
	I0314 11:14:18.922287   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:14:18.935029   13262 logs.go:123] Gathering logs for kube-apiserver [bc405a5fcc4c] ...
	I0314 11:14:18.935043   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc405a5fcc4c"
	I0314 11:14:18.948555   13262 logs.go:123] Gathering logs for coredns [0301160bab63] ...
	I0314 11:14:18.948568   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0301160bab63"
	I0314 11:14:18.960419   13262 logs.go:123] Gathering logs for kube-scheduler [82be34d06648] ...
	I0314 11:14:18.960430   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82be34d06648"
	I0314 11:14:18.981059   13262 logs.go:123] Gathering logs for kube-controller-manager [c1e3435f5898] ...
	I0314 11:14:18.981069   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1e3435f5898"
	I0314 11:14:18.999130   13262 logs.go:123] Gathering logs for Docker ...
	I0314 11:14:18.999140   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:14:23.626209   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:14:23.626243   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0314 11:14:24.018812   13130 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0314 11:14:24.022388   13130 out.go:177] * Enabled addons: storage-provisioner
	I0314 11:14:21.527194   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:14:24.034383   13130 addons.go:505] duration metric: took 30.533042917s for enable addons: enabled=[storage-provisioner]
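The addon phase thus ends split: storage-provisioner is reported enabled (its apply was issued at 11:13:53, and a storage-provisioner container, 118988a93a39, does appear in this process's later inventories), while default-storageclass fails because marking "standard" as the default first requires listing StorageClasses through the very apiserver that keeps timing out. The failing call is essentially this request; client certificates are omitted here, whereas the real client authenticates with the cert/key pair from the kapi.go config:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout:   30 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	// The exact URL from the error message above.
    	resp, err := client.Get("https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses")
    	if err != nil {
    		fmt.Println("listing StorageClasses failed:", err) // "dial tcp ... i/o timeout" in the log
    		return
    	}
    	defer resp.Body.Close()
    	fmt.Println("status:", resp.Status)
    }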
	I0314 11:14:28.627746   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:14:28.627795   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:14:26.529489   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:14:26.529635   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:14:26.547309   13262 logs.go:276] 2 containers: [f9395aa9cac2 bc405a5fcc4c]
	I0314 11:14:26.547393   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:14:26.559361   13262 logs.go:276] 2 containers: [d36e3bec2911 8e89db56a692]
	I0314 11:14:26.559429   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:14:26.569503   13262 logs.go:276] 1 containers: [0301160bab63]
	I0314 11:14:26.569571   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:14:26.580045   13262 logs.go:276] 2 containers: [d82fc4548c26 82be34d06648]
	I0314 11:14:26.580114   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:14:26.590052   13262 logs.go:276] 1 containers: [25bb30ac6daf]
	I0314 11:14:26.590120   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:14:26.600385   13262 logs.go:276] 2 containers: [1f70f1399231 c1e3435f5898]
	I0314 11:14:26.600458   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:14:26.610155   13262 logs.go:276] 0 containers: []
	W0314 11:14:26.610169   13262 logs.go:278] No container was found matching "kindnet"
	I0314 11:14:26.610235   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:14:26.620477   13262 logs.go:276] 0 containers: []
	W0314 11:14:26.620488   13262 logs.go:278] No container was found matching "storage-provisioner"
	I0314 11:14:26.620497   13262 logs.go:123] Gathering logs for kube-apiserver [bc405a5fcc4c] ...
	I0314 11:14:26.620504   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc405a5fcc4c"
	I0314 11:14:26.633133   13262 logs.go:123] Gathering logs for etcd [d36e3bec2911] ...
	I0314 11:14:26.633147   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36e3bec2911"
	I0314 11:14:26.647477   13262 logs.go:123] Gathering logs for kube-controller-manager [c1e3435f5898] ...
	I0314 11:14:26.647490   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1e3435f5898"
	I0314 11:14:26.664382   13262 logs.go:123] Gathering logs for Docker ...
	I0314 11:14:26.664392   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:14:26.688422   13262 logs.go:123] Gathering logs for etcd [8e89db56a692] ...
	I0314 11:14:26.688432   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e89db56a692"
	I0314 11:14:26.702650   13262 logs.go:123] Gathering logs for kube-controller-manager [1f70f1399231] ...
	I0314 11:14:26.702661   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f70f1399231"
	I0314 11:14:26.719732   13262 logs.go:123] Gathering logs for kube-proxy [25bb30ac6daf] ...
	I0314 11:14:26.719743   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25bb30ac6daf"
	I0314 11:14:26.731689   13262 logs.go:123] Gathering logs for kubelet ...
	I0314 11:14:26.731701   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:14:26.761958   13262 logs.go:123] Gathering logs for dmesg ...
	I0314 11:14:26.761967   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:14:26.765991   13262 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:14:26.766002   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:14:26.801751   13262 logs.go:123] Gathering logs for kube-apiserver [f9395aa9cac2] ...
	I0314 11:14:26.801762   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9395aa9cac2"
	I0314 11:14:26.816378   13262 logs.go:123] Gathering logs for coredns [0301160bab63] ...
	I0314 11:14:26.816388   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0301160bab63"
	I0314 11:14:26.827975   13262 logs.go:123] Gathering logs for kube-scheduler [d82fc4548c26] ...
	I0314 11:14:26.827989   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d82fc4548c26"
	I0314 11:14:26.852026   13262 logs.go:123] Gathering logs for kube-scheduler [82be34d06648] ...
	I0314 11:14:26.852037   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82be34d06648"
	I0314 11:14:26.867038   13262 logs.go:123] Gathering logs for container status ...
	I0314 11:14:26.867049   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:14:29.381689   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:14:33.629582   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:14:33.629630   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:14:34.383932   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:14:34.384134   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:14:34.404777   13262 logs.go:276] 2 containers: [f9395aa9cac2 bc405a5fcc4c]
	I0314 11:14:34.404866   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:14:34.421099   13262 logs.go:276] 2 containers: [d36e3bec2911 8e89db56a692]
	I0314 11:14:34.421174   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:14:34.433428   13262 logs.go:276] 1 containers: [0301160bab63]
	I0314 11:14:34.433505   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:14:34.444079   13262 logs.go:276] 2 containers: [d82fc4548c26 82be34d06648]
	I0314 11:14:34.444144   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:14:34.454677   13262 logs.go:276] 1 containers: [25bb30ac6daf]
	I0314 11:14:34.454748   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:14:34.465159   13262 logs.go:276] 2 containers: [1f70f1399231 c1e3435f5898]
	I0314 11:14:34.465227   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:14:34.475454   13262 logs.go:276] 0 containers: []
	W0314 11:14:34.475466   13262 logs.go:278] No container was found matching "kindnet"
	I0314 11:14:34.475527   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:14:34.485614   13262 logs.go:276] 0 containers: []
	W0314 11:14:34.485629   13262 logs.go:278] No container was found matching "storage-provisioner"
	I0314 11:14:34.485636   13262 logs.go:123] Gathering logs for kube-proxy [25bb30ac6daf] ...
	I0314 11:14:34.485642   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25bb30ac6daf"
	I0314 11:14:34.497666   13262 logs.go:123] Gathering logs for kube-controller-manager [c1e3435f5898] ...
	I0314 11:14:34.497679   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1e3435f5898"
	I0314 11:14:34.514670   13262 logs.go:123] Gathering logs for container status ...
	I0314 11:14:34.514683   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:14:34.527142   13262 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:14:34.527155   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:14:34.562382   13262 logs.go:123] Gathering logs for kube-apiserver [f9395aa9cac2] ...
	I0314 11:14:34.562394   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9395aa9cac2"
	I0314 11:14:34.577236   13262 logs.go:123] Gathering logs for kube-scheduler [82be34d06648] ...
	I0314 11:14:34.577248   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82be34d06648"
	I0314 11:14:34.592714   13262 logs.go:123] Gathering logs for kube-controller-manager [1f70f1399231] ...
	I0314 11:14:34.592726   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f70f1399231"
	I0314 11:14:34.623234   13262 logs.go:123] Gathering logs for kubelet ...
	I0314 11:14:34.623250   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:14:34.655674   13262 logs.go:123] Gathering logs for dmesg ...
	I0314 11:14:34.655688   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:14:34.660239   13262 logs.go:123] Gathering logs for Docker ...
	I0314 11:14:34.660246   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:14:34.685515   13262 logs.go:123] Gathering logs for etcd [8e89db56a692] ...
	I0314 11:14:34.685526   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e89db56a692"
	I0314 11:14:34.702496   13262 logs.go:123] Gathering logs for coredns [0301160bab63] ...
	I0314 11:14:34.702506   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0301160bab63"
	I0314 11:14:34.713752   13262 logs.go:123] Gathering logs for kube-scheduler [d82fc4548c26] ...
	I0314 11:14:34.713764   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d82fc4548c26"
	I0314 11:14:34.736774   13262 logs.go:123] Gathering logs for kube-apiserver [bc405a5fcc4c] ...
	I0314 11:14:34.736786   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc405a5fcc4c"
	I0314 11:14:34.749680   13262 logs.go:123] Gathering logs for etcd [d36e3bec2911] ...
	I0314 11:14:34.749691   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36e3bec2911"
	I0314 11:14:38.631892   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:14:38.631931   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:14:37.264398   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:14:43.634074   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:14:43.634123   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:14:42.266630   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:14:42.266792   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:14:42.289017   13262 logs.go:276] 2 containers: [f9395aa9cac2 bc405a5fcc4c]
	I0314 11:14:42.289129   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:14:42.304606   13262 logs.go:276] 2 containers: [d36e3bec2911 8e89db56a692]
	I0314 11:14:42.304682   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:14:42.318147   13262 logs.go:276] 1 containers: [0301160bab63]
	I0314 11:14:42.318213   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:14:42.328901   13262 logs.go:276] 2 containers: [d82fc4548c26 82be34d06648]
	I0314 11:14:42.328972   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:14:42.342350   13262 logs.go:276] 1 containers: [25bb30ac6daf]
	I0314 11:14:42.342417   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:14:42.352535   13262 logs.go:276] 2 containers: [1f70f1399231 c1e3435f5898]
	I0314 11:14:42.352599   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:14:42.362528   13262 logs.go:276] 0 containers: []
	W0314 11:14:42.362540   13262 logs.go:278] No container was found matching "kindnet"
	I0314 11:14:42.362612   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:14:42.372185   13262 logs.go:276] 0 containers: []
	W0314 11:14:42.372198   13262 logs.go:278] No container was found matching "storage-provisioner"
	I0314 11:14:42.372206   13262 logs.go:123] Gathering logs for kube-apiserver [bc405a5fcc4c] ...
	I0314 11:14:42.372213   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc405a5fcc4c"
	I0314 11:14:42.385090   13262 logs.go:123] Gathering logs for kube-scheduler [d82fc4548c26] ...
	I0314 11:14:42.385101   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d82fc4548c26"
	I0314 11:14:42.407678   13262 logs.go:123] Gathering logs for container status ...
	I0314 11:14:42.407688   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:14:42.421293   13262 logs.go:123] Gathering logs for kubelet ...
	I0314 11:14:42.421310   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:14:42.450343   13262 logs.go:123] Gathering logs for etcd [d36e3bec2911] ...
	I0314 11:14:42.450351   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36e3bec2911"
	I0314 11:14:42.464119   13262 logs.go:123] Gathering logs for kube-scheduler [82be34d06648] ...
	I0314 11:14:42.464130   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82be34d06648"
	I0314 11:14:42.479222   13262 logs.go:123] Gathering logs for kube-controller-manager [1f70f1399231] ...
	I0314 11:14:42.479234   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f70f1399231"
	I0314 11:14:42.496346   13262 logs.go:123] Gathering logs for Docker ...
	I0314 11:14:42.496356   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:14:42.520635   13262 logs.go:123] Gathering logs for kube-apiserver [f9395aa9cac2] ...
	I0314 11:14:42.520644   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9395aa9cac2"
	I0314 11:14:42.534728   13262 logs.go:123] Gathering logs for coredns [0301160bab63] ...
	I0314 11:14:42.534739   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0301160bab63"
	I0314 11:14:42.545650   13262 logs.go:123] Gathering logs for kube-proxy [25bb30ac6daf] ...
	I0314 11:14:42.545660   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25bb30ac6daf"
	I0314 11:14:42.558079   13262 logs.go:123] Gathering logs for kube-controller-manager [c1e3435f5898] ...
	I0314 11:14:42.558095   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1e3435f5898"
	I0314 11:14:42.575472   13262 logs.go:123] Gathering logs for dmesg ...
	I0314 11:14:42.575485   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:14:42.579835   13262 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:14:42.579844   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:14:42.632748   13262 logs.go:123] Gathering logs for etcd [8e89db56a692] ...
	I0314 11:14:42.632760   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e89db56a692"
	I0314 11:14:45.152845   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:14:48.636282   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:14:48.636326   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:14:50.155134   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:14:50.155324   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:14:50.176526   13262 logs.go:276] 2 containers: [f9395aa9cac2 bc405a5fcc4c]
	I0314 11:14:50.176625   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:14:50.191306   13262 logs.go:276] 2 containers: [d36e3bec2911 8e89db56a692]
	I0314 11:14:50.191384   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:14:50.203120   13262 logs.go:276] 1 containers: [0301160bab63]
	I0314 11:14:50.203192   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:14:50.213969   13262 logs.go:276] 2 containers: [d82fc4548c26 82be34d06648]
	I0314 11:14:50.214038   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:14:50.224955   13262 logs.go:276] 1 containers: [25bb30ac6daf]
	I0314 11:14:50.225021   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:14:50.235362   13262 logs.go:276] 2 containers: [1f70f1399231 c1e3435f5898]
	I0314 11:14:50.235434   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:14:50.247352   13262 logs.go:276] 0 containers: []
	W0314 11:14:50.247366   13262 logs.go:278] No container was found matching "kindnet"
	I0314 11:14:50.247426   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:14:50.257312   13262 logs.go:276] 0 containers: []
	W0314 11:14:50.257326   13262 logs.go:278] No container was found matching "storage-provisioner"
	I0314 11:14:50.257335   13262 logs.go:123] Gathering logs for dmesg ...
	I0314 11:14:50.257342   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:14:50.261497   13262 logs.go:123] Gathering logs for kube-controller-manager [c1e3435f5898] ...
	I0314 11:14:50.261503   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1e3435f5898"
	I0314 11:14:50.279377   13262 logs.go:123] Gathering logs for kube-scheduler [82be34d06648] ...
	I0314 11:14:50.279393   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82be34d06648"
	I0314 11:14:50.294261   13262 logs.go:123] Gathering logs for container status ...
	I0314 11:14:50.294275   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:14:50.330390   13262 logs.go:123] Gathering logs for kubelet ...
	I0314 11:14:50.330402   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:14:50.361032   13262 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:14:50.361040   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:14:50.396027   13262 logs.go:123] Gathering logs for kube-apiserver [f9395aa9cac2] ...
	I0314 11:14:50.396043   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9395aa9cac2"
	I0314 11:14:53.636651   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:14:53.636796   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:14:53.655431   13130 logs.go:276] 1 containers: [f764ecfece48]
	I0314 11:14:53.655504   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:14:53.667193   13130 logs.go:276] 1 containers: [41bacb9f9ff3]
	I0314 11:14:53.667262   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:14:53.678730   13130 logs.go:276] 2 containers: [e4efe5340121 a5edbd1e8e3a]
	I0314 11:14:53.678803   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:14:53.690722   13130 logs.go:276] 1 containers: [12305d6bc0e3]
	I0314 11:14:53.690803   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:14:53.702231   13130 logs.go:276] 1 containers: [b218570cde77]
	I0314 11:14:53.702302   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:14:53.713613   13130 logs.go:276] 1 containers: [c539cb8e52c8]
	I0314 11:14:53.713685   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:14:53.724470   13130 logs.go:276] 0 containers: []
	W0314 11:14:53.724482   13130 logs.go:278] No container was found matching "kindnet"
	I0314 11:14:53.724541   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:14:53.735697   13130 logs.go:276] 1 containers: [118988a93a39]
	I0314 11:14:53.735719   13130 logs.go:123] Gathering logs for Docker ...
	I0314 11:14:53.735725   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:14:53.760119   13130 logs.go:123] Gathering logs for container status ...
	I0314 11:14:53.760127   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:14:53.772569   13130 logs.go:123] Gathering logs for kubelet ...
	I0314 11:14:53.772580   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:14:53.807183   13130 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:14:53.807191   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:14:53.843992   13130 logs.go:123] Gathering logs for kube-apiserver [f764ecfece48] ...
	I0314 11:14:53.844006   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f764ecfece48"
	I0314 11:14:53.859455   13130 logs.go:123] Gathering logs for coredns [a5edbd1e8e3a] ...
	I0314 11:14:53.859468   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5edbd1e8e3a"
	I0314 11:14:53.872413   13130 logs.go:123] Gathering logs for kube-scheduler [12305d6bc0e3] ...
	I0314 11:14:53.872425   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12305d6bc0e3"
	I0314 11:14:53.888189   13130 logs.go:123] Gathering logs for storage-provisioner [118988a93a39] ...
	I0314 11:14:53.888202   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 118988a93a39"
	I0314 11:14:53.900429   13130 logs.go:123] Gathering logs for dmesg ...
	I0314 11:14:53.900442   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:14:53.905001   13130 logs.go:123] Gathering logs for etcd [41bacb9f9ff3] ...
	I0314 11:14:53.905015   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41bacb9f9ff3"
	I0314 11:14:53.920244   13130 logs.go:123] Gathering logs for coredns [e4efe5340121] ...
	I0314 11:14:53.920256   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4efe5340121"
	I0314 11:14:53.932652   13130 logs.go:123] Gathering logs for kube-proxy [b218570cde77] ...
	I0314 11:14:53.932666   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b218570cde77"
	I0314 11:14:53.945562   13130 logs.go:123] Gathering logs for kube-controller-manager [c539cb8e52c8] ...
	I0314 11:14:53.945572   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c539cb8e52c8"
	I0314 11:14:50.409932   13262 logs.go:123] Gathering logs for kube-controller-manager [1f70f1399231] ...
	I0314 11:14:50.413109   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f70f1399231"
	I0314 11:14:50.431873   13262 logs.go:123] Gathering logs for etcd [8e89db56a692] ...
	I0314 11:14:50.431883   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e89db56a692"
	I0314 11:14:50.447000   13262 logs.go:123] Gathering logs for coredns [0301160bab63] ...
	I0314 11:14:50.447015   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0301160bab63"
	I0314 11:14:50.458764   13262 logs.go:123] Gathering logs for kube-scheduler [d82fc4548c26] ...
	I0314 11:14:50.458776   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d82fc4548c26"
	I0314 11:14:50.482510   13262 logs.go:123] Gathering logs for kube-proxy [25bb30ac6daf] ...
	I0314 11:14:50.482520   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25bb30ac6daf"
	I0314 11:14:50.494397   13262 logs.go:123] Gathering logs for Docker ...
	I0314 11:14:50.494415   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:14:50.518510   13262 logs.go:123] Gathering logs for kube-apiserver [bc405a5fcc4c] ...
	I0314 11:14:50.518520   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc405a5fcc4c"
	I0314 11:14:50.530951   13262 logs.go:123] Gathering logs for etcd [d36e3bec2911] ...
	I0314 11:14:50.530962   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36e3bec2911"
	I0314 11:14:53.046259   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:14:56.464646   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:14:58.048576   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:14:58.048747   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:14:58.065701   13262 logs.go:276] 2 containers: [f9395aa9cac2 bc405a5fcc4c]
	I0314 11:14:58.065790   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:14:58.079237   13262 logs.go:276] 2 containers: [d36e3bec2911 8e89db56a692]
	I0314 11:14:58.079316   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:14:58.090618   13262 logs.go:276] 1 containers: [0301160bab63]
	I0314 11:14:58.090690   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:14:58.101412   13262 logs.go:276] 2 containers: [d82fc4548c26 82be34d06648]
	I0314 11:14:58.101496   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:14:58.111884   13262 logs.go:276] 1 containers: [25bb30ac6daf]
	I0314 11:14:58.111951   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:14:58.123095   13262 logs.go:276] 2 containers: [1f70f1399231 c1e3435f5898]
	I0314 11:14:58.123169   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:14:58.132947   13262 logs.go:276] 0 containers: []
	W0314 11:14:58.132959   13262 logs.go:278] No container was found matching "kindnet"
	I0314 11:14:58.133024   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:14:58.142998   13262 logs.go:276] 0 containers: []
	W0314 11:14:58.143011   13262 logs.go:278] No container was found matching "storage-provisioner"
	I0314 11:14:58.143020   13262 logs.go:123] Gathering logs for kube-controller-manager [c1e3435f5898] ...
	I0314 11:14:58.143025   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1e3435f5898"
	I0314 11:14:58.160602   13262 logs.go:123] Gathering logs for Docker ...
	I0314 11:14:58.160612   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:14:58.185451   13262 logs.go:123] Gathering logs for kube-apiserver [bc405a5fcc4c] ...
	I0314 11:14:58.185458   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc405a5fcc4c"
	I0314 11:14:58.197783   13262 logs.go:123] Gathering logs for kube-scheduler [d82fc4548c26] ...
	I0314 11:14:58.197792   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d82fc4548c26"
	I0314 11:14:58.221132   13262 logs.go:123] Gathering logs for kube-proxy [25bb30ac6daf] ...
	I0314 11:14:58.221142   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25bb30ac6daf"
	I0314 11:14:58.233510   13262 logs.go:123] Gathering logs for kube-controller-manager [1f70f1399231] ...
	I0314 11:14:58.233521   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f70f1399231"
	I0314 11:14:58.252109   13262 logs.go:123] Gathering logs for etcd [8e89db56a692] ...
	I0314 11:14:58.252120   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e89db56a692"
	I0314 11:14:58.266539   13262 logs.go:123] Gathering logs for coredns [0301160bab63] ...
	I0314 11:14:58.266549   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0301160bab63"
	I0314 11:14:58.280264   13262 logs.go:123] Gathering logs for etcd [d36e3bec2911] ...
	I0314 11:14:58.280276   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36e3bec2911"
	I0314 11:14:58.294282   13262 logs.go:123] Gathering logs for kube-scheduler [82be34d06648] ...
	I0314 11:14:58.294292   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82be34d06648"
	I0314 11:14:58.319556   13262 logs.go:123] Gathering logs for container status ...
	I0314 11:14:58.319567   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:14:58.331574   13262 logs.go:123] Gathering logs for kubelet ...
	I0314 11:14:58.331585   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:14:58.362341   13262 logs.go:123] Gathering logs for dmesg ...
	I0314 11:14:58.362350   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:14:58.366212   13262 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:14:58.366221   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:14:58.400919   13262 logs.go:123] Gathering logs for kube-apiserver [f9395aa9cac2] ...
	I0314 11:14:58.400930   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9395aa9cac2"
	I0314 11:15:01.466728   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:15:01.466844   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:15:01.478997   13130 logs.go:276] 1 containers: [f764ecfece48]
	I0314 11:15:01.479075   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:15:01.490171   13130 logs.go:276] 1 containers: [41bacb9f9ff3]
	I0314 11:15:01.490234   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:15:01.501566   13130 logs.go:276] 2 containers: [e4efe5340121 a5edbd1e8e3a]
	I0314 11:15:01.501630   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:15:01.513510   13130 logs.go:276] 1 containers: [12305d6bc0e3]
	I0314 11:15:01.513583   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:15:01.527936   13130 logs.go:276] 1 containers: [b218570cde77]
	I0314 11:15:01.528010   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:15:01.539669   13130 logs.go:276] 1 containers: [c539cb8e52c8]
	I0314 11:15:01.539740   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:15:01.550472   13130 logs.go:276] 0 containers: []
	W0314 11:15:01.550488   13130 logs.go:278] No container was found matching "kindnet"
	I0314 11:15:01.550548   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:15:01.561595   13130 logs.go:276] 1 containers: [118988a93a39]
	I0314 11:15:01.561612   13130 logs.go:123] Gathering logs for coredns [e4efe5340121] ...
	I0314 11:15:01.561617   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4efe5340121"
	I0314 11:15:01.574549   13130 logs.go:123] Gathering logs for coredns [a5edbd1e8e3a] ...
	I0314 11:15:01.574560   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5edbd1e8e3a"
	I0314 11:15:01.586962   13130 logs.go:123] Gathering logs for kube-scheduler [12305d6bc0e3] ...
	I0314 11:15:01.586982   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12305d6bc0e3"
	I0314 11:15:01.602488   13130 logs.go:123] Gathering logs for kube-controller-manager [c539cb8e52c8] ...
	I0314 11:15:01.602501   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c539cb8e52c8"
	I0314 11:15:01.620293   13130 logs.go:123] Gathering logs for storage-provisioner [118988a93a39] ...
	I0314 11:15:01.620304   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 118988a93a39"
	I0314 11:15:01.632188   13130 logs.go:123] Gathering logs for Docker ...
	I0314 11:15:01.632199   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:15:01.657076   13130 logs.go:123] Gathering logs for kube-apiserver [f764ecfece48] ...
	I0314 11:15:01.657086   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f764ecfece48"
	I0314 11:15:01.672287   13130 logs.go:123] Gathering logs for etcd [41bacb9f9ff3] ...
	I0314 11:15:01.672297   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41bacb9f9ff3"
	I0314 11:15:01.687221   13130 logs.go:123] Gathering logs for container status ...
	I0314 11:15:01.687231   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:15:01.698866   13130 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:15:01.698879   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:15:01.737689   13130 logs.go:123] Gathering logs for kube-proxy [b218570cde77] ...
	I0314 11:15:01.737701   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b218570cde77"
	I0314 11:15:01.751867   13130 logs.go:123] Gathering logs for kubelet ...
	I0314 11:15:01.751877   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:15:01.787572   13130 logs.go:123] Gathering logs for dmesg ...
	I0314 11:15:01.787580   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
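
Each time the probe fails, the log shows one `docker ps` per control-plane component, filtered on the `k8s_` name prefix that the Docker runtime gives pod containers, with `--format={{.ID}}` so only IDs come back. A sketch of that discovery step, with the component list taken from the filters visible above:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Component names copied from the filters in the log above.
        components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
        for _, c := range components {
            out, err := exec.Command("docker", "ps", "-a",
                "--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
            if err != nil {
                fmt.Println(c, "error:", err)
                continue
            }
            ids := strings.Fields(string(out))
            // Mirrors the "logs.go:276] N containers: [...]" lines.
            fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
        }
    }

An empty result produces the warning seen above for kindnet and storage-provisioner: No container was found matching the name.
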
	I0314 11:15:00.917224   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:15:04.293775   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:15:05.919351   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:15:05.919490   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:15:05.932217   13262 logs.go:276] 2 containers: [f9395aa9cac2 bc405a5fcc4c]
	I0314 11:15:05.932287   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:15:05.943612   13262 logs.go:276] 2 containers: [d36e3bec2911 8e89db56a692]
	I0314 11:15:05.943675   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:15:05.953757   13262 logs.go:276] 1 containers: [0301160bab63]
	I0314 11:15:05.953831   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:15:05.964057   13262 logs.go:276] 2 containers: [d82fc4548c26 82be34d06648]
	I0314 11:15:05.964121   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:15:05.974063   13262 logs.go:276] 1 containers: [25bb30ac6daf]
	I0314 11:15:05.974140   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:15:05.984725   13262 logs.go:276] 2 containers: [1f70f1399231 c1e3435f5898]
	I0314 11:15:05.984804   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:15:05.994634   13262 logs.go:276] 0 containers: []
	W0314 11:15:05.994652   13262 logs.go:278] No container was found matching "kindnet"
	I0314 11:15:05.994715   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:15:06.004775   13262 logs.go:276] 0 containers: []
	W0314 11:15:06.004787   13262 logs.go:278] No container was found matching "storage-provisioner"
	I0314 11:15:06.004795   13262 logs.go:123] Gathering logs for container status ...
	I0314 11:15:06.004801   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:15:06.016362   13262 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:15:06.016374   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:15:06.052427   13262 logs.go:123] Gathering logs for kube-apiserver [bc405a5fcc4c] ...
	I0314 11:15:06.052438   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc405a5fcc4c"
	I0314 11:15:06.065085   13262 logs.go:123] Gathering logs for kube-scheduler [d82fc4548c26] ...
	I0314 11:15:06.065097   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d82fc4548c26"
	I0314 11:15:06.088326   13262 logs.go:123] Gathering logs for Docker ...
	I0314 11:15:06.088338   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:15:06.113389   13262 logs.go:123] Gathering logs for kubelet ...
	I0314 11:15:06.113400   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:15:06.143143   13262 logs.go:123] Gathering logs for etcd [8e89db56a692] ...
	I0314 11:15:06.143153   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e89db56a692"
	I0314 11:15:06.164107   13262 logs.go:123] Gathering logs for etcd [d36e3bec2911] ...
	I0314 11:15:06.164118   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36e3bec2911"
	I0314 11:15:06.178101   13262 logs.go:123] Gathering logs for kube-scheduler [82be34d06648] ...
	I0314 11:15:06.178111   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82be34d06648"
	I0314 11:15:06.192533   13262 logs.go:123] Gathering logs for kube-proxy [25bb30ac6daf] ...
	I0314 11:15:06.192544   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25bb30ac6daf"
	I0314 11:15:06.203889   13262 logs.go:123] Gathering logs for kube-controller-manager [1f70f1399231] ...
	I0314 11:15:06.203900   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f70f1399231"
	I0314 11:15:06.224749   13262 logs.go:123] Gathering logs for kube-controller-manager [c1e3435f5898] ...
	I0314 11:15:06.224760   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1e3435f5898"
	I0314 11:15:06.242937   13262 logs.go:123] Gathering logs for dmesg ...
	I0314 11:15:06.242950   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:15:06.247603   13262 logs.go:123] Gathering logs for kube-apiserver [f9395aa9cac2] ...
	I0314 11:15:06.247609   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9395aa9cac2"
	I0314 11:15:06.265747   13262 logs.go:123] Gathering logs for coredns [0301160bab63] ...
	I0314 11:15:06.265757   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0301160bab63"
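
With the container IDs in hand, every "Gathering logs for ..." line pairs with a shell command executed through ssh_runner as /bin/bash -c: `docker logs --tail 400 <id>` for containers, `journalctl -u <unit> -n 400` for host services. A reduced sketch of that runner, executed locally rather than over SSH for illustration; the container ID is a placeholder from the log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // run mimics ssh_runner's `/bin/bash -c "<cmd>"` invocation, locally.
    func run(cmd string) (string, error) {
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        return string(out), err
    }

    func main() {
        // Commands copied from the log above.
        for _, cmd := range []string{
            "docker logs --tail 400 0301160bab63",
            "sudo journalctl -u kubelet -n 400",
        } {
            fmt.Println("Run:", cmd)
            if out, err := run(cmd); err != nil {
                fmt.Println("error:", err)
            } else {
                fmt.Print(out)
            }
        }
    }
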
	I0314 11:15:08.778821   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:15:09.296267   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:15:09.296611   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:15:09.330731   13130 logs.go:276] 1 containers: [f764ecfece48]
	I0314 11:15:09.330859   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:15:09.349930   13130 logs.go:276] 1 containers: [41bacb9f9ff3]
	I0314 11:15:09.350016   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:15:09.363589   13130 logs.go:276] 2 containers: [e4efe5340121 a5edbd1e8e3a]
	I0314 11:15:09.363666   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:15:09.375272   13130 logs.go:276] 1 containers: [12305d6bc0e3]
	I0314 11:15:09.375348   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:15:09.386471   13130 logs.go:276] 1 containers: [b218570cde77]
	I0314 11:15:09.386541   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:15:09.397027   13130 logs.go:276] 1 containers: [c539cb8e52c8]
	I0314 11:15:09.397097   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:15:09.407748   13130 logs.go:276] 0 containers: []
	W0314 11:15:09.407759   13130 logs.go:278] No container was found matching "kindnet"
	I0314 11:15:09.407828   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:15:09.418865   13130 logs.go:276] 1 containers: [118988a93a39]
	I0314 11:15:09.418881   13130 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:15:09.418887   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:15:09.455690   13130 logs.go:123] Gathering logs for coredns [e4efe5340121] ...
	I0314 11:15:09.455701   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4efe5340121"
	I0314 11:15:09.468149   13130 logs.go:123] Gathering logs for coredns [a5edbd1e8e3a] ...
	I0314 11:15:09.468159   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5edbd1e8e3a"
	I0314 11:15:09.479984   13130 logs.go:123] Gathering logs for kube-scheduler [12305d6bc0e3] ...
	I0314 11:15:09.479995   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12305d6bc0e3"
	I0314 11:15:09.495618   13130 logs.go:123] Gathering logs for kube-proxy [b218570cde77] ...
	I0314 11:15:09.495628   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b218570cde77"
	I0314 11:15:09.507706   13130 logs.go:123] Gathering logs for storage-provisioner [118988a93a39] ...
	I0314 11:15:09.507717   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 118988a93a39"
	I0314 11:15:09.519790   13130 logs.go:123] Gathering logs for kubelet ...
	I0314 11:15:09.519800   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:15:09.558300   13130 logs.go:123] Gathering logs for dmesg ...
	I0314 11:15:09.558309   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:15:09.562598   13130 logs.go:123] Gathering logs for kube-apiserver [f764ecfece48] ...
	I0314 11:15:09.562603   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f764ecfece48"
	I0314 11:15:09.577103   13130 logs.go:123] Gathering logs for etcd [41bacb9f9ff3] ...
	I0314 11:15:09.577114   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41bacb9f9ff3"
	I0314 11:15:09.592347   13130 logs.go:123] Gathering logs for kube-controller-manager [c539cb8e52c8] ...
	I0314 11:15:09.592357   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c539cb8e52c8"
	I0314 11:15:09.610401   13130 logs.go:123] Gathering logs for Docker ...
	I0314 11:15:09.610412   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:15:09.633783   13130 logs.go:123] Gathering logs for container status ...
	I0314 11:15:09.633791   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:15:12.149170   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:15:13.781103   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:15:13.781261   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:15:13.797929   13262 logs.go:276] 2 containers: [f9395aa9cac2 bc405a5fcc4c]
	I0314 11:15:13.798014   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:15:13.810609   13262 logs.go:276] 2 containers: [d36e3bec2911 8e89db56a692]
	I0314 11:15:13.810682   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:15:13.826517   13262 logs.go:276] 1 containers: [0301160bab63]
	I0314 11:15:13.826595   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:15:13.837086   13262 logs.go:276] 2 containers: [d82fc4548c26 82be34d06648]
	I0314 11:15:13.837151   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:15:13.847469   13262 logs.go:276] 1 containers: [25bb30ac6daf]
	I0314 11:15:13.847527   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:15:13.857724   13262 logs.go:276] 2 containers: [1f70f1399231 c1e3435f5898]
	I0314 11:15:13.857792   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:15:13.867214   13262 logs.go:276] 0 containers: []
	W0314 11:15:13.867226   13262 logs.go:278] No container was found matching "kindnet"
	I0314 11:15:13.867289   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:15:13.877677   13262 logs.go:276] 0 containers: []
	W0314 11:15:13.877691   13262 logs.go:278] No container was found matching "storage-provisioner"
	I0314 11:15:13.877699   13262 logs.go:123] Gathering logs for etcd [8e89db56a692] ...
	I0314 11:15:13.877708   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e89db56a692"
	I0314 11:15:13.891870   13262 logs.go:123] Gathering logs for coredns [0301160bab63] ...
	I0314 11:15:13.891881   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0301160bab63"
	I0314 11:15:13.903726   13262 logs.go:123] Gathering logs for etcd [d36e3bec2911] ...
	I0314 11:15:13.903739   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36e3bec2911"
	I0314 11:15:13.917490   13262 logs.go:123] Gathering logs for kube-scheduler [82be34d06648] ...
	I0314 11:15:13.917501   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82be34d06648"
	I0314 11:15:13.932715   13262 logs.go:123] Gathering logs for Docker ...
	I0314 11:15:13.932726   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:15:13.957178   13262 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:15:13.957186   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:15:13.992886   13262 logs.go:123] Gathering logs for kube-apiserver [f9395aa9cac2] ...
	I0314 11:15:13.992898   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9395aa9cac2"
	I0314 11:15:14.006729   13262 logs.go:123] Gathering logs for container status ...
	I0314 11:15:14.006740   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:15:14.019430   13262 logs.go:123] Gathering logs for kubelet ...
	I0314 11:15:14.019442   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:15:14.047998   13262 logs.go:123] Gathering logs for kube-apiserver [bc405a5fcc4c] ...
	I0314 11:15:14.048006   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc405a5fcc4c"
	I0314 11:15:14.060472   13262 logs.go:123] Gathering logs for kube-scheduler [d82fc4548c26] ...
	I0314 11:15:14.060484   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d82fc4548c26"
	I0314 11:15:14.084090   13262 logs.go:123] Gathering logs for kube-proxy [25bb30ac6daf] ...
	I0314 11:15:14.084103   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25bb30ac6daf"
	I0314 11:15:14.096749   13262 logs.go:123] Gathering logs for kube-controller-manager [1f70f1399231] ...
	I0314 11:15:14.096764   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f70f1399231"
	I0314 11:15:14.120579   13262 logs.go:123] Gathering logs for kube-controller-manager [c1e3435f5898] ...
	I0314 11:15:14.120591   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1e3435f5898"
	I0314 11:15:14.138210   13262 logs.go:123] Gathering logs for dmesg ...
	I0314 11:15:14.138224   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
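
The "container status" step uses a shell fallback chain: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a. The backticks substitute crictl's full path when it is installed (or the bare name, which then fails to execute), and the outer || falls through to `docker ps -a` whenever the crictl invocation errors. The same fallback expressed directly in Go:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Try crictl first; fall back to docker, like the log's one-liner.
        out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
        if err != nil {
            out, err = exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
        }
        if err != nil {
            fmt.Println("both runtimes failed:", err)
            return
        }
        fmt.Print(string(out))
    }
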
	I0314 11:15:17.151442   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:15:17.151722   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:15:17.179296   13130 logs.go:276] 1 containers: [f764ecfece48]
	I0314 11:15:17.179418   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:15:17.197231   13130 logs.go:276] 1 containers: [41bacb9f9ff3]
	I0314 11:15:17.197318   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:15:17.210333   13130 logs.go:276] 2 containers: [e4efe5340121 a5edbd1e8e3a]
	I0314 11:15:17.210409   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:15:17.229772   13130 logs.go:276] 1 containers: [12305d6bc0e3]
	I0314 11:15:17.229842   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:15:17.241627   13130 logs.go:276] 1 containers: [b218570cde77]
	I0314 11:15:17.241704   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:15:17.252779   13130 logs.go:276] 1 containers: [c539cb8e52c8]
	I0314 11:15:17.252848   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:15:17.262926   13130 logs.go:276] 0 containers: []
	W0314 11:15:17.262939   13130 logs.go:278] No container was found matching "kindnet"
	I0314 11:15:17.262998   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:15:17.275772   13130 logs.go:276] 1 containers: [118988a93a39]
	I0314 11:15:17.275789   13130 logs.go:123] Gathering logs for kubelet ...
	I0314 11:15:17.275794   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:15:17.310356   13130 logs.go:123] Gathering logs for kube-scheduler [12305d6bc0e3] ...
	I0314 11:15:17.310368   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12305d6bc0e3"
	I0314 11:15:17.330879   13130 logs.go:123] Gathering logs for kube-controller-manager [c539cb8e52c8] ...
	I0314 11:15:17.330894   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c539cb8e52c8"
	I0314 11:15:17.348961   13130 logs.go:123] Gathering logs for Docker ...
	I0314 11:15:17.348971   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:15:17.373503   13130 logs.go:123] Gathering logs for storage-provisioner [118988a93a39] ...
	I0314 11:15:17.373513   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 118988a93a39"
	I0314 11:15:17.392636   13130 logs.go:123] Gathering logs for dmesg ...
	I0314 11:15:17.392650   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:15:17.397596   13130 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:15:17.397602   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:15:17.434465   13130 logs.go:123] Gathering logs for kube-apiserver [f764ecfece48] ...
	I0314 11:15:17.434477   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f764ecfece48"
	I0314 11:15:17.448984   13130 logs.go:123] Gathering logs for etcd [41bacb9f9ff3] ...
	I0314 11:15:17.448996   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41bacb9f9ff3"
	I0314 11:15:17.463628   13130 logs.go:123] Gathering logs for coredns [e4efe5340121] ...
	I0314 11:15:17.463643   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4efe5340121"
	I0314 11:15:17.476513   13130 logs.go:123] Gathering logs for coredns [a5edbd1e8e3a] ...
	I0314 11:15:17.476523   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5edbd1e8e3a"
	I0314 11:15:17.488133   13130 logs.go:123] Gathering logs for kube-proxy [b218570cde77] ...
	I0314 11:15:17.488145   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b218570cde77"
	I0314 11:15:17.499825   13130 logs.go:123] Gathering logs for container status ...
	I0314 11:15:17.499836   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:15:16.643048   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:15:20.013769   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:15:21.645330   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:15:21.645520   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:15:21.659644   13262 logs.go:276] 2 containers: [f9395aa9cac2 bc405a5fcc4c]
	I0314 11:15:21.659717   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:15:21.672335   13262 logs.go:276] 2 containers: [d36e3bec2911 8e89db56a692]
	I0314 11:15:21.672418   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:15:21.683299   13262 logs.go:276] 1 containers: [0301160bab63]
	I0314 11:15:21.683371   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:15:21.694170   13262 logs.go:276] 2 containers: [d82fc4548c26 82be34d06648]
	I0314 11:15:21.694243   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:15:21.708210   13262 logs.go:276] 1 containers: [25bb30ac6daf]
	I0314 11:15:21.708284   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:15:21.719111   13262 logs.go:276] 2 containers: [1f70f1399231 c1e3435f5898]
	I0314 11:15:21.719181   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:15:21.729202   13262 logs.go:276] 0 containers: []
	W0314 11:15:21.729215   13262 logs.go:278] No container was found matching "kindnet"
	I0314 11:15:21.729279   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:15:21.741010   13262 logs.go:276] 0 containers: []
	W0314 11:15:21.741024   13262 logs.go:278] No container was found matching "storage-provisioner"
	I0314 11:15:21.741033   13262 logs.go:123] Gathering logs for etcd [d36e3bec2911] ...
	I0314 11:15:21.741039   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36e3bec2911"
	I0314 11:15:21.755888   13262 logs.go:123] Gathering logs for kube-apiserver [f9395aa9cac2] ...
	I0314 11:15:21.755901   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9395aa9cac2"
	I0314 11:15:21.769561   13262 logs.go:123] Gathering logs for kube-scheduler [d82fc4548c26] ...
	I0314 11:15:21.769573   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d82fc4548c26"
	I0314 11:15:21.793726   13262 logs.go:123] Gathering logs for kube-scheduler [82be34d06648] ...
	I0314 11:15:21.793738   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82be34d06648"
	I0314 11:15:21.808530   13262 logs.go:123] Gathering logs for Docker ...
	I0314 11:15:21.808540   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:15:21.833331   13262 logs.go:123] Gathering logs for coredns [0301160bab63] ...
	I0314 11:15:21.833339   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0301160bab63"
	I0314 11:15:21.844480   13262 logs.go:123] Gathering logs for dmesg ...
	I0314 11:15:21.844492   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:15:21.848442   13262 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:15:21.848449   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:15:21.883517   13262 logs.go:123] Gathering logs for kube-controller-manager [1f70f1399231] ...
	I0314 11:15:21.883528   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f70f1399231"
	I0314 11:15:21.901896   13262 logs.go:123] Gathering logs for container status ...
	I0314 11:15:21.901905   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:15:21.917898   13262 logs.go:123] Gathering logs for kubelet ...
	I0314 11:15:21.917908   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:15:21.946687   13262 logs.go:123] Gathering logs for etcd [8e89db56a692] ...
	I0314 11:15:21.946698   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e89db56a692"
	I0314 11:15:21.961301   13262 logs.go:123] Gathering logs for kube-proxy [25bb30ac6daf] ...
	I0314 11:15:21.961311   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25bb30ac6daf"
	I0314 11:15:21.973895   13262 logs.go:123] Gathering logs for kube-controller-manager [c1e3435f5898] ...
	I0314 11:15:21.973905   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1e3435f5898"
	I0314 11:15:21.991736   13262 logs.go:123] Gathering logs for kube-apiserver [bc405a5fcc4c] ...
	I0314 11:15:21.991748   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc405a5fcc4c"
	I0314 11:15:24.507286   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:15:25.015997   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:15:25.016237   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:15:25.041677   13130 logs.go:276] 1 containers: [f764ecfece48]
	I0314 11:15:25.041798   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:15:25.057943   13130 logs.go:276] 1 containers: [41bacb9f9ff3]
	I0314 11:15:25.058022   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:15:25.071216   13130 logs.go:276] 2 containers: [e4efe5340121 a5edbd1e8e3a]
	I0314 11:15:25.071278   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:15:25.083055   13130 logs.go:276] 1 containers: [12305d6bc0e3]
	I0314 11:15:25.083128   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:15:25.094064   13130 logs.go:276] 1 containers: [b218570cde77]
	I0314 11:15:25.094140   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:15:25.105229   13130 logs.go:276] 1 containers: [c539cb8e52c8]
	I0314 11:15:25.105296   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:15:25.118525   13130 logs.go:276] 0 containers: []
	W0314 11:15:25.118535   13130 logs.go:278] No container was found matching "kindnet"
	I0314 11:15:25.118588   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:15:25.129875   13130 logs.go:276] 1 containers: [118988a93a39]
	I0314 11:15:25.129895   13130 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:15:25.129900   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:15:25.166069   13130 logs.go:123] Gathering logs for coredns [e4efe5340121] ...
	I0314 11:15:25.166081   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4efe5340121"
	I0314 11:15:25.183061   13130 logs.go:123] Gathering logs for coredns [a5edbd1e8e3a] ...
	I0314 11:15:25.183076   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5edbd1e8e3a"
	I0314 11:15:25.195383   13130 logs.go:123] Gathering logs for kube-controller-manager [c539cb8e52c8] ...
	I0314 11:15:25.195393   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c539cb8e52c8"
	I0314 11:15:25.218422   13130 logs.go:123] Gathering logs for storage-provisioner [118988a93a39] ...
	I0314 11:15:25.218435   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 118988a93a39"
	I0314 11:15:25.233947   13130 logs.go:123] Gathering logs for Docker ...
	I0314 11:15:25.233958   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:15:25.257598   13130 logs.go:123] Gathering logs for kubelet ...
	I0314 11:15:25.257606   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:15:25.291723   13130 logs.go:123] Gathering logs for kube-apiserver [f764ecfece48] ...
	I0314 11:15:25.291733   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f764ecfece48"
	I0314 11:15:25.311338   13130 logs.go:123] Gathering logs for etcd [41bacb9f9ff3] ...
	I0314 11:15:25.311351   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41bacb9f9ff3"
	I0314 11:15:25.327928   13130 logs.go:123] Gathering logs for kube-scheduler [12305d6bc0e3] ...
	I0314 11:15:25.327940   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12305d6bc0e3"
	I0314 11:15:25.344134   13130 logs.go:123] Gathering logs for kube-proxy [b218570cde77] ...
	I0314 11:15:25.344146   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b218570cde77"
	I0314 11:15:25.356556   13130 logs.go:123] Gathering logs for container status ...
	I0314 11:15:25.356569   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:15:25.368915   13130 logs.go:123] Gathering logs for dmesg ...
	I0314 11:15:25.368924   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
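
Two minikube processes (PIDs 13130 and 13262) are interleaved in this transcript, each repeating the same cycle: probe /healthz, time out, re-enumerate containers, re-dump logs. That interleaving is why timestamps occasionally step backward between adjacent lines. A sketch of the outer retry loop, with the interval and overall deadline as assumptions inferred from the timestamp cadence:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // checkHealthz stands in for the probe sketched earlier; here it always fails,
    // as it does throughout this transcript.
    func checkHealthz() error { return errors.New("context deadline exceeded") }

    func main() {
        deadline := time.Now().Add(30 * time.Second) // hypothetical overall budget
        for time.Now().Before(deadline) {
            if err := checkHealthz(); err != nil {
                fmt.Println("stopped:", err)
                // On each failure the real code gathers diagnostics before retrying.
                time.Sleep(4 * time.Second) // approximate cadence seen in the timestamps
                continue
            }
            fmt.Println("apiserver healthy")
            return
        }
        fmt.Println("gave up waiting for apiserver")
    }
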
	I0314 11:15:27.875435   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:15:29.509533   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:15:29.509795   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:15:29.535031   13262 logs.go:276] 2 containers: [f9395aa9cac2 bc405a5fcc4c]
	I0314 11:15:29.535148   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:15:29.552628   13262 logs.go:276] 2 containers: [d36e3bec2911 8e89db56a692]
	I0314 11:15:29.552707   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:15:29.565714   13262 logs.go:276] 1 containers: [0301160bab63]
	I0314 11:15:29.565794   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:15:29.577584   13262 logs.go:276] 2 containers: [d82fc4548c26 82be34d06648]
	I0314 11:15:29.577655   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:15:29.591850   13262 logs.go:276] 1 containers: [25bb30ac6daf]
	I0314 11:15:29.591915   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:15:29.602638   13262 logs.go:276] 2 containers: [1f70f1399231 c1e3435f5898]
	I0314 11:15:29.602711   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:15:29.612908   13262 logs.go:276] 0 containers: []
	W0314 11:15:29.612919   13262 logs.go:278] No container was found matching "kindnet"
	I0314 11:15:29.612976   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:15:29.622953   13262 logs.go:276] 0 containers: []
	W0314 11:15:29.622963   13262 logs.go:278] No container was found matching "storage-provisioner"
	I0314 11:15:29.622971   13262 logs.go:123] Gathering logs for Docker ...
	I0314 11:15:29.622976   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:15:29.647538   13262 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:15:29.647547   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:15:29.685682   13262 logs.go:123] Gathering logs for kube-apiserver [bc405a5fcc4c] ...
	I0314 11:15:29.685695   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc405a5fcc4c"
	I0314 11:15:29.698395   13262 logs.go:123] Gathering logs for etcd [8e89db56a692] ...
	I0314 11:15:29.698404   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e89db56a692"
	I0314 11:15:29.716583   13262 logs.go:123] Gathering logs for kube-scheduler [d82fc4548c26] ...
	I0314 11:15:29.716594   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d82fc4548c26"
	I0314 11:15:29.742671   13262 logs.go:123] Gathering logs for kube-controller-manager [1f70f1399231] ...
	I0314 11:15:29.742683   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f70f1399231"
	I0314 11:15:29.759455   13262 logs.go:123] Gathering logs for kube-controller-manager [c1e3435f5898] ...
	I0314 11:15:29.759466   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1e3435f5898"
	I0314 11:15:29.783600   13262 logs.go:123] Gathering logs for kube-apiserver [f9395aa9cac2] ...
	I0314 11:15:29.783611   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9395aa9cac2"
	I0314 11:15:29.797877   13262 logs.go:123] Gathering logs for dmesg ...
	I0314 11:15:29.797886   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:15:29.802241   13262 logs.go:123] Gathering logs for coredns [0301160bab63] ...
	I0314 11:15:29.802248   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0301160bab63"
	I0314 11:15:29.814188   13262 logs.go:123] Gathering logs for kube-proxy [25bb30ac6daf] ...
	I0314 11:15:29.814200   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25bb30ac6daf"
	I0314 11:15:29.825871   13262 logs.go:123] Gathering logs for container status ...
	I0314 11:15:29.825883   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:15:29.838211   13262 logs.go:123] Gathering logs for kubelet ...
	I0314 11:15:29.838223   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:15:29.867398   13262 logs.go:123] Gathering logs for kube-scheduler [82be34d06648] ...
	I0314 11:15:29.867418   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82be34d06648"
	I0314 11:15:29.888492   13262 logs.go:123] Gathering logs for etcd [d36e3bec2911] ...
	I0314 11:15:29.888504   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36e3bec2911"
	I0314 11:15:32.877852   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:15:32.878052   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:15:32.901215   13130 logs.go:276] 1 containers: [f764ecfece48]
	I0314 11:15:32.901307   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:15:32.918056   13130 logs.go:276] 1 containers: [41bacb9f9ff3]
	I0314 11:15:32.918139   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:15:32.931399   13130 logs.go:276] 2 containers: [e4efe5340121 a5edbd1e8e3a]
	I0314 11:15:32.931462   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:15:32.943101   13130 logs.go:276] 1 containers: [12305d6bc0e3]
	I0314 11:15:32.943168   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:15:32.953965   13130 logs.go:276] 1 containers: [b218570cde77]
	I0314 11:15:32.954042   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:15:32.965104   13130 logs.go:276] 1 containers: [c539cb8e52c8]
	I0314 11:15:32.965162   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:15:32.975572   13130 logs.go:276] 0 containers: []
	W0314 11:15:32.975583   13130 logs.go:278] No container was found matching "kindnet"
	I0314 11:15:32.975642   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:15:32.986755   13130 logs.go:276] 1 containers: [118988a93a39]
	I0314 11:15:32.986770   13130 logs.go:123] Gathering logs for kube-apiserver [f764ecfece48] ...
	I0314 11:15:32.986775   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f764ecfece48"
	I0314 11:15:33.001334   13130 logs.go:123] Gathering logs for etcd [41bacb9f9ff3] ...
	I0314 11:15:33.001348   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41bacb9f9ff3"
	I0314 11:15:33.016020   13130 logs.go:123] Gathering logs for coredns [a5edbd1e8e3a] ...
	I0314 11:15:33.016030   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5edbd1e8e3a"
	I0314 11:15:33.030119   13130 logs.go:123] Gathering logs for kube-proxy [b218570cde77] ...
	I0314 11:15:33.030130   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b218570cde77"
	I0314 11:15:33.042299   13130 logs.go:123] Gathering logs for kube-controller-manager [c539cb8e52c8] ...
	I0314 11:15:33.042309   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c539cb8e52c8"
	I0314 11:15:33.060767   13130 logs.go:123] Gathering logs for kubelet ...
	I0314 11:15:33.060777   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:15:33.096911   13130 logs.go:123] Gathering logs for dmesg ...
	I0314 11:15:33.096919   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:15:33.101843   13130 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:15:33.101848   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:15:33.142677   13130 logs.go:123] Gathering logs for Docker ...
	I0314 11:15:33.142691   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:15:33.165280   13130 logs.go:123] Gathering logs for container status ...
	I0314 11:15:33.165287   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:15:33.176882   13130 logs.go:123] Gathering logs for coredns [e4efe5340121] ...
	I0314 11:15:33.176895   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4efe5340121"
	I0314 11:15:33.189920   13130 logs.go:123] Gathering logs for kube-scheduler [12305d6bc0e3] ...
	I0314 11:15:33.189930   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12305d6bc0e3"
	I0314 11:15:33.205760   13130 logs.go:123] Gathering logs for storage-provisioner [118988a93a39] ...
	I0314 11:15:33.205770   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 118988a93a39"
	I0314 11:15:32.404364   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:15:35.719944   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:15:37.406588   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:15:37.406778   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:15:37.428578   13262 logs.go:276] 2 containers: [f9395aa9cac2 bc405a5fcc4c]
	I0314 11:15:37.428670   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:15:37.441906   13262 logs.go:276] 2 containers: [d36e3bec2911 8e89db56a692]
	I0314 11:15:37.441980   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:15:37.453210   13262 logs.go:276] 1 containers: [0301160bab63]
	I0314 11:15:37.453283   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:15:37.463645   13262 logs.go:276] 2 containers: [d82fc4548c26 82be34d06648]
	I0314 11:15:37.463719   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:15:37.474432   13262 logs.go:276] 1 containers: [25bb30ac6daf]
	I0314 11:15:37.474494   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:15:37.493102   13262 logs.go:276] 2 containers: [1f70f1399231 c1e3435f5898]
	I0314 11:15:37.493176   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:15:37.503721   13262 logs.go:276] 0 containers: []
	W0314 11:15:37.503733   13262 logs.go:278] No container was found matching "kindnet"
	I0314 11:15:37.503785   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:15:37.514105   13262 logs.go:276] 0 containers: []
	W0314 11:15:37.514116   13262 logs.go:278] No container was found matching "storage-provisioner"
	I0314 11:15:37.514125   13262 logs.go:123] Gathering logs for kube-controller-manager [1f70f1399231] ...
	I0314 11:15:37.514130   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f70f1399231"
	I0314 11:15:37.532533   13262 logs.go:123] Gathering logs for kube-controller-manager [c1e3435f5898] ...
	I0314 11:15:37.532545   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1e3435f5898"
	I0314 11:15:37.551135   13262 logs.go:123] Gathering logs for container status ...
	I0314 11:15:37.551145   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:15:37.562884   13262 logs.go:123] Gathering logs for kubelet ...
	I0314 11:15:37.562894   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:15:37.591521   13262 logs.go:123] Gathering logs for coredns [0301160bab63] ...
	I0314 11:15:37.591533   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0301160bab63"
	I0314 11:15:37.602796   13262 logs.go:123] Gathering logs for Docker ...
	I0314 11:15:37.602806   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:15:37.626209   13262 logs.go:123] Gathering logs for dmesg ...
	I0314 11:15:37.626215   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:15:37.630343   13262 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:15:37.630350   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:15:37.670981   13262 logs.go:123] Gathering logs for kube-apiserver [f9395aa9cac2] ...
	I0314 11:15:37.670992   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9395aa9cac2"
	I0314 11:15:37.686310   13262 logs.go:123] Gathering logs for kube-apiserver [bc405a5fcc4c] ...
	I0314 11:15:37.686321   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc405a5fcc4c"
	I0314 11:15:37.702581   13262 logs.go:123] Gathering logs for etcd [d36e3bec2911] ...
	I0314 11:15:37.702590   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36e3bec2911"
	I0314 11:15:37.716957   13262 logs.go:123] Gathering logs for etcd [8e89db56a692] ...
	I0314 11:15:37.716968   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e89db56a692"
	I0314 11:15:37.731708   13262 logs.go:123] Gathering logs for kube-scheduler [d82fc4548c26] ...
	I0314 11:15:37.731719   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d82fc4548c26"
	I0314 11:15:37.755546   13262 logs.go:123] Gathering logs for kube-scheduler [82be34d06648] ...
	I0314 11:15:37.755559   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82be34d06648"
	I0314 11:15:37.770896   13262 logs.go:123] Gathering logs for kube-proxy [25bb30ac6daf] ...
	I0314 11:15:37.770910   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25bb30ac6daf"
	I0314 11:15:40.283038   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:15:40.722189   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:15:40.722357   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:15:40.735909   13130 logs.go:276] 1 containers: [f764ecfece48]
	I0314 11:15:40.735995   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:15:40.747455   13130 logs.go:276] 1 containers: [41bacb9f9ff3]
	I0314 11:15:40.747525   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:15:40.758279   13130 logs.go:276] 2 containers: [e4efe5340121 a5edbd1e8e3a]
	I0314 11:15:40.758374   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:15:40.769862   13130 logs.go:276] 1 containers: [12305d6bc0e3]
	I0314 11:15:40.769934   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:15:40.781977   13130 logs.go:276] 1 containers: [b218570cde77]
	I0314 11:15:40.782049   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:15:40.793616   13130 logs.go:276] 1 containers: [c539cb8e52c8]
	I0314 11:15:40.793686   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:15:40.804807   13130 logs.go:276] 0 containers: []
	W0314 11:15:40.804819   13130 logs.go:278] No container was found matching "kindnet"
	I0314 11:15:40.804879   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:15:40.816758   13130 logs.go:276] 1 containers: [118988a93a39]
	I0314 11:15:40.816775   13130 logs.go:123] Gathering logs for storage-provisioner [118988a93a39] ...
	I0314 11:15:40.816780   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 118988a93a39"
	I0314 11:15:40.829506   13130 logs.go:123] Gathering logs for Docker ...
	I0314 11:15:40.829514   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:15:40.854741   13130 logs.go:123] Gathering logs for container status ...
	I0314 11:15:40.854749   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:15:40.866828   13130 logs.go:123] Gathering logs for kubelet ...
	I0314 11:15:40.866839   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:15:40.901372   13130 logs.go:123] Gathering logs for dmesg ...
	I0314 11:15:40.901383   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:15:40.905911   13130 logs.go:123] Gathering logs for kube-apiserver [f764ecfece48] ...
	I0314 11:15:40.905920   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f764ecfece48"
	I0314 11:15:40.920936   13130 logs.go:123] Gathering logs for etcd [41bacb9f9ff3] ...
	I0314 11:15:40.920946   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41bacb9f9ff3"
	I0314 11:15:40.935969   13130 logs.go:123] Gathering logs for kube-controller-manager [c539cb8e52c8] ...
	I0314 11:15:40.935982   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c539cb8e52c8"
	I0314 11:15:40.953571   13130 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:15:40.953579   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:15:40.995810   13130 logs.go:123] Gathering logs for coredns [e4efe5340121] ...
	I0314 11:15:40.995824   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4efe5340121"
	I0314 11:15:41.008287   13130 logs.go:123] Gathering logs for coredns [a5edbd1e8e3a] ...
	I0314 11:15:41.008298   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5edbd1e8e3a"
	I0314 11:15:41.020665   13130 logs.go:123] Gathering logs for kube-scheduler [12305d6bc0e3] ...
	I0314 11:15:41.020676   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12305d6bc0e3"
	I0314 11:15:41.036912   13130 logs.go:123] Gathering logs for kube-proxy [b218570cde77] ...
	I0314 11:15:41.036923   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b218570cde77"
	I0314 11:15:43.551317   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:15:45.285285   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:15:45.285578   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:15:45.318424   13262 logs.go:276] 2 containers: [f9395aa9cac2 bc405a5fcc4c]
	I0314 11:15:45.318590   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:15:45.339202   13262 logs.go:276] 2 containers: [d36e3bec2911 8e89db56a692]
	I0314 11:15:45.339306   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:15:45.355160   13262 logs.go:276] 1 containers: [0301160bab63]
	I0314 11:15:45.355245   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:15:45.367286   13262 logs.go:276] 2 containers: [d82fc4548c26 82be34d06648]
	I0314 11:15:45.367368   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:15:45.378091   13262 logs.go:276] 1 containers: [25bb30ac6daf]
	I0314 11:15:45.378159   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:15:45.388871   13262 logs.go:276] 2 containers: [1f70f1399231 c1e3435f5898]
	I0314 11:15:45.388942   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:15:45.399032   13262 logs.go:276] 0 containers: []
	W0314 11:15:45.399049   13262 logs.go:278] No container was found matching "kindnet"
	I0314 11:15:45.399109   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:15:48.552139   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:15:48.552249   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:15:48.564152   13130 logs.go:276] 1 containers: [f764ecfece48]
	I0314 11:15:48.564226   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:15:48.579370   13130 logs.go:276] 1 containers: [41bacb9f9ff3]
	I0314 11:15:48.579442   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:15:48.590515   13130 logs.go:276] 2 containers: [e4efe5340121 a5edbd1e8e3a]
	I0314 11:15:48.590584   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:15:48.601549   13130 logs.go:276] 1 containers: [12305d6bc0e3]
	I0314 11:15:48.601622   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:15:48.612231   13130 logs.go:276] 1 containers: [b218570cde77]
	I0314 11:15:48.612306   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:15:48.623388   13130 logs.go:276] 1 containers: [c539cb8e52c8]
	I0314 11:15:48.623461   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:15:48.641732   13130 logs.go:276] 0 containers: []
	W0314 11:15:48.641747   13130 logs.go:278] No container was found matching "kindnet"
	I0314 11:15:48.641804   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:15:48.655945   13130 logs.go:276] 1 containers: [118988a93a39]
	I0314 11:15:48.655961   13130 logs.go:123] Gathering logs for kubelet ...
	I0314 11:15:48.655966   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:15:48.690375   13130 logs.go:123] Gathering logs for kube-apiserver [f764ecfece48] ...
	I0314 11:15:48.690387   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f764ecfece48"
	I0314 11:15:48.714180   13130 logs.go:123] Gathering logs for coredns [e4efe5340121] ...
	I0314 11:15:48.714192   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4efe5340121"
	I0314 11:15:48.726524   13130 logs.go:123] Gathering logs for coredns [a5edbd1e8e3a] ...
	I0314 11:15:48.726535   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5edbd1e8e3a"
	I0314 11:15:48.738466   13130 logs.go:123] Gathering logs for kube-scheduler [12305d6bc0e3] ...
	I0314 11:15:48.738476   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12305d6bc0e3"
	I0314 11:15:48.754681   13130 logs.go:123] Gathering logs for kube-proxy [b218570cde77] ...
	I0314 11:15:48.754693   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b218570cde77"
	I0314 11:15:48.775427   13130 logs.go:123] Gathering logs for kube-controller-manager [c539cb8e52c8] ...
	I0314 11:15:48.775439   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c539cb8e52c8"
	I0314 11:15:48.793066   13130 logs.go:123] Gathering logs for Docker ...
	I0314 11:15:48.793079   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:15:48.816746   13130 logs.go:123] Gathering logs for dmesg ...
	I0314 11:15:48.816755   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:15:48.821410   13130 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:15:48.821418   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:15:48.862378   13130 logs.go:123] Gathering logs for etcd [41bacb9f9ff3] ...
	I0314 11:15:48.862389   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41bacb9f9ff3"
	I0314 11:15:48.877018   13130 logs.go:123] Gathering logs for storage-provisioner [118988a93a39] ...
	I0314 11:15:48.877030   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 118988a93a39"
	I0314 11:15:48.889717   13130 logs.go:123] Gathering logs for container status ...
	I0314 11:15:48.889728   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
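	The sweep itself has two halves. First, each control-plane component's containers are enumerated with a docker name filter; second, the last 400 log lines of every match are tailed. The non-container sources round it out: journalctl covers kubelet and docker/cri-docker, dmesg is filtered to warnings and above, and "describe nodes" shells out to the guest's own version-matched kubectl under /var/lib/minikube/binaries/v1.24.1/ with the guest kubeconfig. The container-status step, sudo `which crictl || echo crictl` ps -a || sudo docker ps -a, prefers crictl when installed and falls back to docker otherwise. A minimal local sketch of the discovery-plus-tail cycle follows (assumed helper names; minikube runs the same commands inside the guest over SSH via ssh_runner):

    // A minimal sketch, with assumed helper names, of the diagnostic sweep that
    // repeats above: list each component's containers by name filter, then tail
    // the last 400 log lines of every match. Minikube issues the same commands
    // inside the guest over SSH (ssh_runner); this runs them locally instead.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs mirrors: docker ps -a --filter=name=k8s_<component> --format={{.ID}}
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    // tailLogs mirrors: docker logs --tail 400 <id> (container logs may arrive
    // on stderr, hence CombinedOutput).
    func tailLogs(id string) (string, error) {
        out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
        return string(out), err
    }

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
        }
        for _, c := range components {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Printf("listing %s failed: %v\n", c, err)
                continue
            }
            fmt.Printf("%d containers: %v\n", len(ids), ids)
            if len(ids) == 0 {
                fmt.Printf("No container was found matching %q\n", c)
                continue
            }
            for _, id := range ids {
                logs, _ := tailLogs(id) // best-effort, as in the trace
                fmt.Printf("--- %s [%s] ---\n%s", c, id, logs)
            }
        }
    }

	Zero matches are reported as warnings rather than errors, which is why kindnet — absent from these docker-runtime clusters — shows up as a warning on every sweep, and why the 13262 cluster additionally warns about storage-provisioner while the 13130 cluster still finds one (118988a93a39).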
	I0314 11:15:45.409587   13262 logs.go:276] 0 containers: []
	W0314 11:15:45.412120   13262 logs.go:278] No container was found matching "storage-provisioner"
	I0314 11:15:45.412136   13262 logs.go:123] Gathering logs for dmesg ...
	I0314 11:15:45.412147   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:15:45.416286   13262 logs.go:123] Gathering logs for etcd [d36e3bec2911] ...
	I0314 11:15:45.416293   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36e3bec2911"
	I0314 11:15:45.430455   13262 logs.go:123] Gathering logs for kube-controller-manager [1f70f1399231] ...
	I0314 11:15:45.430466   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f70f1399231"
	I0314 11:15:45.449897   13262 logs.go:123] Gathering logs for Docker ...
	I0314 11:15:45.449907   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:15:45.474680   13262 logs.go:123] Gathering logs for container status ...
	I0314 11:15:45.474686   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:15:45.486185   13262 logs.go:123] Gathering logs for kubelet ...
	I0314 11:15:45.486194   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:15:45.516498   13262 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:15:45.516508   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:15:45.551839   13262 logs.go:123] Gathering logs for kube-apiserver [bc405a5fcc4c] ...
	I0314 11:15:45.551853   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc405a5fcc4c"
	I0314 11:15:45.564713   13262 logs.go:123] Gathering logs for kube-scheduler [d82fc4548c26] ...
	I0314 11:15:45.564724   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d82fc4548c26"
	I0314 11:15:45.588044   13262 logs.go:123] Gathering logs for kube-scheduler [82be34d06648] ...
	I0314 11:15:45.588055   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82be34d06648"
	I0314 11:15:45.602778   13262 logs.go:123] Gathering logs for coredns [0301160bab63] ...
	I0314 11:15:45.602791   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0301160bab63"
	I0314 11:15:45.614425   13262 logs.go:123] Gathering logs for kube-proxy [25bb30ac6daf] ...
	I0314 11:15:45.614437   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25bb30ac6daf"
	I0314 11:15:45.626070   13262 logs.go:123] Gathering logs for kube-apiserver [f9395aa9cac2] ...
	I0314 11:15:45.626082   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9395aa9cac2"
	I0314 11:15:45.640358   13262 logs.go:123] Gathering logs for etcd [8e89db56a692] ...
	I0314 11:15:45.640371   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e89db56a692"
	I0314 11:15:45.655120   13262 logs.go:123] Gathering logs for kube-controller-manager [c1e3435f5898] ...
	I0314 11:15:45.655132   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1e3435f5898"
	I0314 11:15:48.173958   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:15:51.403419   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:15:53.176612   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:15:53.176958   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:15:53.214887   13262 logs.go:276] 2 containers: [f9395aa9cac2 bc405a5fcc4c]
	I0314 11:15:53.215034   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:15:53.237482   13262 logs.go:276] 2 containers: [d36e3bec2911 8e89db56a692]
	I0314 11:15:53.237579   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:15:53.252464   13262 logs.go:276] 1 containers: [0301160bab63]
	I0314 11:15:53.252540   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:15:53.265147   13262 logs.go:276] 2 containers: [d82fc4548c26 82be34d06648]
	I0314 11:15:53.265225   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:15:53.276228   13262 logs.go:276] 1 containers: [25bb30ac6daf]
	I0314 11:15:53.276298   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:15:53.287285   13262 logs.go:276] 2 containers: [1f70f1399231 c1e3435f5898]
	I0314 11:15:53.287351   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:15:53.298434   13262 logs.go:276] 0 containers: []
	W0314 11:15:53.298447   13262 logs.go:278] No container was found matching "kindnet"
	I0314 11:15:53.298511   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:15:53.309734   13262 logs.go:276] 0 containers: []
	W0314 11:15:53.309745   13262 logs.go:278] No container was found matching "storage-provisioner"
	I0314 11:15:53.309755   13262 logs.go:123] Gathering logs for kubelet ...
	I0314 11:15:53.309761   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:15:53.339171   13262 logs.go:123] Gathering logs for kube-apiserver [bc405a5fcc4c] ...
	I0314 11:15:53.339182   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc405a5fcc4c"
	I0314 11:15:53.352145   13262 logs.go:123] Gathering logs for etcd [8e89db56a692] ...
	I0314 11:15:53.352157   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e89db56a692"
	I0314 11:15:53.371189   13262 logs.go:123] Gathering logs for coredns [0301160bab63] ...
	I0314 11:15:53.371201   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0301160bab63"
	I0314 11:15:53.382268   13262 logs.go:123] Gathering logs for kube-scheduler [82be34d06648] ...
	I0314 11:15:53.382279   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82be34d06648"
	I0314 11:15:53.397295   13262 logs.go:123] Gathering logs for etcd [d36e3bec2911] ...
	I0314 11:15:53.397305   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36e3bec2911"
	I0314 11:15:53.415284   13262 logs.go:123] Gathering logs for kube-scheduler [d82fc4548c26] ...
	I0314 11:15:53.415293   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d82fc4548c26"
	I0314 11:15:53.437640   13262 logs.go:123] Gathering logs for kube-proxy [25bb30ac6daf] ...
	I0314 11:15:53.437651   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25bb30ac6daf"
	I0314 11:15:53.449517   13262 logs.go:123] Gathering logs for kube-controller-manager [1f70f1399231] ...
	I0314 11:15:53.449529   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f70f1399231"
	I0314 11:15:53.467819   13262 logs.go:123] Gathering logs for Docker ...
	I0314 11:15:53.467830   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:15:53.492801   13262 logs.go:123] Gathering logs for kube-apiserver [f9395aa9cac2] ...
	I0314 11:15:53.492809   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9395aa9cac2"
	I0314 11:15:53.515559   13262 logs.go:123] Gathering logs for dmesg ...
	I0314 11:15:53.515570   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:15:53.519747   13262 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:15:53.519753   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:15:53.555675   13262 logs.go:123] Gathering logs for kube-controller-manager [c1e3435f5898] ...
	I0314 11:15:53.555692   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1e3435f5898"
	I0314 11:15:53.576966   13262 logs.go:123] Gathering logs for container status ...
	I0314 11:15:53.576976   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:15:56.405612   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:15:56.405777   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:15:56.425467   13130 logs.go:276] 1 containers: [f764ecfece48]
	I0314 11:15:56.425556   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:15:56.438772   13130 logs.go:276] 1 containers: [41bacb9f9ff3]
	I0314 11:15:56.438845   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:15:56.449957   13130 logs.go:276] 2 containers: [e4efe5340121 a5edbd1e8e3a]
	I0314 11:15:56.450030   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:15:56.460053   13130 logs.go:276] 1 containers: [12305d6bc0e3]
	I0314 11:15:56.460126   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:15:56.470804   13130 logs.go:276] 1 containers: [b218570cde77]
	I0314 11:15:56.470880   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:15:56.481373   13130 logs.go:276] 1 containers: [c539cb8e52c8]
	I0314 11:15:56.481438   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:15:56.498906   13130 logs.go:276] 0 containers: []
	W0314 11:15:56.498918   13130 logs.go:278] No container was found matching "kindnet"
	I0314 11:15:56.498976   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:15:56.509867   13130 logs.go:276] 1 containers: [118988a93a39]
	I0314 11:15:56.509882   13130 logs.go:123] Gathering logs for kubelet ...
	I0314 11:15:56.509887   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:15:56.545474   13130 logs.go:123] Gathering logs for dmesg ...
	I0314 11:15:56.545483   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:15:56.550255   13130 logs.go:123] Gathering logs for etcd [41bacb9f9ff3] ...
	I0314 11:15:56.550263   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41bacb9f9ff3"
	I0314 11:15:56.563882   13130 logs.go:123] Gathering logs for coredns [e4efe5340121] ...
	I0314 11:15:56.563893   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4efe5340121"
	I0314 11:15:56.575676   13130 logs.go:123] Gathering logs for kube-scheduler [12305d6bc0e3] ...
	I0314 11:15:56.575690   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12305d6bc0e3"
	I0314 11:15:56.591513   13130 logs.go:123] Gathering logs for kube-controller-manager [c539cb8e52c8] ...
	I0314 11:15:56.591523   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c539cb8e52c8"
	I0314 11:15:56.608979   13130 logs.go:123] Gathering logs for Docker ...
	I0314 11:15:56.608990   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:15:56.634323   13130 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:15:56.634332   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:15:56.669019   13130 logs.go:123] Gathering logs for kube-apiserver [f764ecfece48] ...
	I0314 11:15:56.669033   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f764ecfece48"
	I0314 11:15:56.684495   13130 logs.go:123] Gathering logs for coredns [a5edbd1e8e3a] ...
	I0314 11:15:56.684505   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5edbd1e8e3a"
	I0314 11:15:56.696961   13130 logs.go:123] Gathering logs for kube-proxy [b218570cde77] ...
	I0314 11:15:56.696970   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b218570cde77"
	I0314 11:15:56.708860   13130 logs.go:123] Gathering logs for storage-provisioner [118988a93a39] ...
	I0314 11:15:56.708871   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 118988a93a39"
	I0314 11:15:56.721647   13130 logs.go:123] Gathering logs for container status ...
	I0314 11:15:56.721659   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:15:56.091539   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:15:59.235599   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:16:01.093787   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:16:01.093995   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:16:01.112699   13262 logs.go:276] 2 containers: [f9395aa9cac2 bc405a5fcc4c]
	I0314 11:16:01.112790   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:16:01.126473   13262 logs.go:276] 2 containers: [d36e3bec2911 8e89db56a692]
	I0314 11:16:01.126553   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:16:01.138051   13262 logs.go:276] 1 containers: [0301160bab63]
	I0314 11:16:01.138123   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:16:01.148597   13262 logs.go:276] 2 containers: [d82fc4548c26 82be34d06648]
	I0314 11:16:01.148673   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:16:01.158749   13262 logs.go:276] 1 containers: [25bb30ac6daf]
	I0314 11:16:01.158823   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:16:01.169483   13262 logs.go:276] 2 containers: [1f70f1399231 c1e3435f5898]
	I0314 11:16:01.169551   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:16:01.179798   13262 logs.go:276] 0 containers: []
	W0314 11:16:01.179809   13262 logs.go:278] No container was found matching "kindnet"
	I0314 11:16:01.179868   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:16:01.190728   13262 logs.go:276] 0 containers: []
	W0314 11:16:01.190738   13262 logs.go:278] No container was found matching "storage-provisioner"
	I0314 11:16:01.190747   13262 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:16:01.190753   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:16:01.226013   13262 logs.go:123] Gathering logs for coredns [0301160bab63] ...
	I0314 11:16:01.226024   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0301160bab63"
	I0314 11:16:01.238179   13262 logs.go:123] Gathering logs for kube-scheduler [82be34d06648] ...
	I0314 11:16:01.238191   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82be34d06648"
	I0314 11:16:01.253563   13262 logs.go:123] Gathering logs for dmesg ...
	I0314 11:16:01.253575   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:16:01.258108   13262 logs.go:123] Gathering logs for kube-apiserver [f9395aa9cac2] ...
	I0314 11:16:01.258116   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9395aa9cac2"
	I0314 11:16:01.272086   13262 logs.go:123] Gathering logs for kube-proxy [25bb30ac6daf] ...
	I0314 11:16:01.272097   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25bb30ac6daf"
	I0314 11:16:01.285160   13262 logs.go:123] Gathering logs for kube-controller-manager [1f70f1399231] ...
	I0314 11:16:01.285174   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f70f1399231"
	I0314 11:16:01.302637   13262 logs.go:123] Gathering logs for kube-controller-manager [c1e3435f5898] ...
	I0314 11:16:01.302652   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1e3435f5898"
	I0314 11:16:01.325899   13262 logs.go:123] Gathering logs for container status ...
	I0314 11:16:01.325910   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:16:01.337983   13262 logs.go:123] Gathering logs for kubelet ...
	I0314 11:16:01.337996   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:16:01.369108   13262 logs.go:123] Gathering logs for kube-apiserver [bc405a5fcc4c] ...
	I0314 11:16:01.369123   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc405a5fcc4c"
	I0314 11:16:01.381665   13262 logs.go:123] Gathering logs for etcd [d36e3bec2911] ...
	I0314 11:16:01.381677   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36e3bec2911"
	I0314 11:16:01.395297   13262 logs.go:123] Gathering logs for etcd [8e89db56a692] ...
	I0314 11:16:01.395312   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e89db56a692"
	I0314 11:16:01.415489   13262 logs.go:123] Gathering logs for kube-scheduler [d82fc4548c26] ...
	I0314 11:16:01.415502   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d82fc4548c26"
	I0314 11:16:01.438947   13262 logs.go:123] Gathering logs for Docker ...
	I0314 11:16:01.438963   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:16:03.965764   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:16:04.237784   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:16:04.238024   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:16:04.263547   13130 logs.go:276] 1 containers: [f764ecfece48]
	I0314 11:16:04.263663   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:16:04.279892   13130 logs.go:276] 1 containers: [41bacb9f9ff3]
	I0314 11:16:04.279975   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:16:04.292933   13130 logs.go:276] 2 containers: [e4efe5340121 a5edbd1e8e3a]
	I0314 11:16:04.293007   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:16:04.304652   13130 logs.go:276] 1 containers: [12305d6bc0e3]
	I0314 11:16:04.304728   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:16:04.315989   13130 logs.go:276] 1 containers: [b218570cde77]
	I0314 11:16:04.316068   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:16:04.326445   13130 logs.go:276] 1 containers: [c539cb8e52c8]
	I0314 11:16:04.326509   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:16:04.337487   13130 logs.go:276] 0 containers: []
	W0314 11:16:04.337498   13130 logs.go:278] No container was found matching "kindnet"
	I0314 11:16:04.337563   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:16:04.347853   13130 logs.go:276] 1 containers: [118988a93a39]
	I0314 11:16:04.347869   13130 logs.go:123] Gathering logs for coredns [e4efe5340121] ...
	I0314 11:16:04.347874   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4efe5340121"
	I0314 11:16:04.359646   13130 logs.go:123] Gathering logs for kube-proxy [b218570cde77] ...
	I0314 11:16:04.359661   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b218570cde77"
	I0314 11:16:04.371581   13130 logs.go:123] Gathering logs for kube-controller-manager [c539cb8e52c8] ...
	I0314 11:16:04.371590   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c539cb8e52c8"
	I0314 11:16:04.398915   13130 logs.go:123] Gathering logs for Docker ...
	I0314 11:16:04.398925   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:16:04.423753   13130 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:16:04.423762   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:16:04.458604   13130 logs.go:123] Gathering logs for etcd [41bacb9f9ff3] ...
	I0314 11:16:04.458615   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41bacb9f9ff3"
	I0314 11:16:04.472978   13130 logs.go:123] Gathering logs for kube-apiserver [f764ecfece48] ...
	I0314 11:16:04.472990   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f764ecfece48"
	I0314 11:16:04.488141   13130 logs.go:123] Gathering logs for coredns [a5edbd1e8e3a] ...
	I0314 11:16:04.488153   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5edbd1e8e3a"
	I0314 11:16:04.499634   13130 logs.go:123] Gathering logs for kube-scheduler [12305d6bc0e3] ...
	I0314 11:16:04.499647   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12305d6bc0e3"
	I0314 11:16:04.514914   13130 logs.go:123] Gathering logs for storage-provisioner [118988a93a39] ...
	I0314 11:16:04.514926   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 118988a93a39"
	I0314 11:16:04.526366   13130 logs.go:123] Gathering logs for container status ...
	I0314 11:16:04.526378   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:16:04.538336   13130 logs.go:123] Gathering logs for kubelet ...
	I0314 11:16:04.538345   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:16:04.574182   13130 logs.go:123] Gathering logs for dmesg ...
	I0314 11:16:04.574191   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:16:07.170060   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:16:09.058052   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:16:09.058267   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:16:09.082640   13262 logs.go:276] 2 containers: [f9395aa9cac2 bc405a5fcc4c]
	I0314 11:16:09.082739   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:16:09.098727   13262 logs.go:276] 2 containers: [d36e3bec2911 8e89db56a692]
	I0314 11:16:09.098795   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:16:09.111380   13262 logs.go:276] 1 containers: [0301160bab63]
	I0314 11:16:09.111450   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:16:09.123065   13262 logs.go:276] 2 containers: [d82fc4548c26 82be34d06648]
	I0314 11:16:09.123137   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:16:09.133167   13262 logs.go:276] 1 containers: [25bb30ac6daf]
	I0314 11:16:09.133233   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:16:09.143760   13262 logs.go:276] 2 containers: [1f70f1399231 c1e3435f5898]
	I0314 11:16:09.143826   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:16:09.154410   13262 logs.go:276] 0 containers: []
	W0314 11:16:09.154423   13262 logs.go:278] No container was found matching "kindnet"
	I0314 11:16:09.154488   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:16:09.165097   13262 logs.go:276] 0 containers: []
	W0314 11:16:09.165109   13262 logs.go:278] No container was found matching "storage-provisioner"
	I0314 11:16:09.165119   13262 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:16:09.165125   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:16:09.200547   13262 logs.go:123] Gathering logs for kube-scheduler [d82fc4548c26] ...
	I0314 11:16:09.200559   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d82fc4548c26"
	I0314 11:16:09.223884   13262 logs.go:123] Gathering logs for kube-proxy [25bb30ac6daf] ...
	I0314 11:16:09.223895   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25bb30ac6daf"
	I0314 11:16:09.235879   13262 logs.go:123] Gathering logs for kubelet ...
	I0314 11:16:09.235893   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:16:09.265667   13262 logs.go:123] Gathering logs for coredns [0301160bab63] ...
	I0314 11:16:09.265676   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0301160bab63"
	I0314 11:16:09.277272   13262 logs.go:123] Gathering logs for kube-controller-manager [1f70f1399231] ...
	I0314 11:16:09.277283   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f70f1399231"
	I0314 11:16:09.294601   13262 logs.go:123] Gathering logs for kube-controller-manager [c1e3435f5898] ...
	I0314 11:16:09.294613   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1e3435f5898"
	I0314 11:16:09.311801   13262 logs.go:123] Gathering logs for Docker ...
	I0314 11:16:09.311811   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:16:09.334966   13262 logs.go:123] Gathering logs for container status ...
	I0314 11:16:09.334974   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:16:09.346974   13262 logs.go:123] Gathering logs for dmesg ...
	I0314 11:16:09.346986   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:16:09.351659   13262 logs.go:123] Gathering logs for kube-apiserver [f9395aa9cac2] ...
	I0314 11:16:09.351666   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9395aa9cac2"
	I0314 11:16:09.365944   13262 logs.go:123] Gathering logs for kube-apiserver [bc405a5fcc4c] ...
	I0314 11:16:09.365958   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc405a5fcc4c"
	I0314 11:16:09.379283   13262 logs.go:123] Gathering logs for etcd [d36e3bec2911] ...
	I0314 11:16:09.379293   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36e3bec2911"
	I0314 11:16:09.400739   13262 logs.go:123] Gathering logs for etcd [8e89db56a692] ...
	I0314 11:16:09.400750   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e89db56a692"
	I0314 11:16:09.415443   13262 logs.go:123] Gathering logs for kube-scheduler [82be34d06648] ...
	I0314 11:16:09.415457   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82be34d06648"
	I0314 11:16:12.172474   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:16:12.173179   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:16:12.197204   13130 logs.go:276] 1 containers: [f764ecfece48]
	I0314 11:16:12.197296   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:16:12.212657   13130 logs.go:276] 1 containers: [41bacb9f9ff3]
	I0314 11:16:12.212731   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:16:12.224769   13130 logs.go:276] 4 containers: [6055464981da 26e593605a81 e4efe5340121 a5edbd1e8e3a]
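	Note the coredns count for the 13130 cluster here: it has grown from two containers to four between the 11:16:04 and 11:16:12 sweeps (6055464981da and 26e593605a81 are new), which suggests kubelet is still (re)creating coredns pods even while the apiserver probe keeps timing out.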
	I0314 11:16:12.224842   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:16:12.235077   13130 logs.go:276] 1 containers: [12305d6bc0e3]
	I0314 11:16:12.235149   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:16:12.245803   13130 logs.go:276] 1 containers: [b218570cde77]
	I0314 11:16:12.245866   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:16:12.256072   13130 logs.go:276] 1 containers: [c539cb8e52c8]
	I0314 11:16:12.256144   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:16:12.270791   13130 logs.go:276] 0 containers: []
	W0314 11:16:12.270805   13130 logs.go:278] No container was found matching "kindnet"
	I0314 11:16:12.270862   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:16:12.281552   13130 logs.go:276] 1 containers: [118988a93a39]
	I0314 11:16:12.281570   13130 logs.go:123] Gathering logs for kube-proxy [b218570cde77] ...
	I0314 11:16:12.281577   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b218570cde77"
	I0314 11:16:12.293908   13130 logs.go:123] Gathering logs for container status ...
	I0314 11:16:12.293921   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:16:12.309017   13130 logs.go:123] Gathering logs for kubelet ...
	I0314 11:16:12.309031   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:16:12.345260   13130 logs.go:123] Gathering logs for kube-apiserver [f764ecfece48] ...
	I0314 11:16:12.345268   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f764ecfece48"
	I0314 11:16:12.361057   13130 logs.go:123] Gathering logs for coredns [26e593605a81] ...
	I0314 11:16:12.361067   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26e593605a81"
	I0314 11:16:12.379958   13130 logs.go:123] Gathering logs for coredns [6055464981da] ...
	I0314 11:16:12.379969   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6055464981da"
	I0314 11:16:12.391660   13130 logs.go:123] Gathering logs for coredns [a5edbd1e8e3a] ...
	I0314 11:16:12.391674   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5edbd1e8e3a"
	I0314 11:16:12.405448   13130 logs.go:123] Gathering logs for Docker ...
	I0314 11:16:12.405459   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:16:12.430509   13130 logs.go:123] Gathering logs for etcd [41bacb9f9ff3] ...
	I0314 11:16:12.430517   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41bacb9f9ff3"
	I0314 11:16:12.448981   13130 logs.go:123] Gathering logs for storage-provisioner [118988a93a39] ...
	I0314 11:16:12.448994   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 118988a93a39"
	I0314 11:16:12.460436   13130 logs.go:123] Gathering logs for kube-scheduler [12305d6bc0e3] ...
	I0314 11:16:12.460446   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12305d6bc0e3"
	I0314 11:16:12.477388   13130 logs.go:123] Gathering logs for kube-controller-manager [c539cb8e52c8] ...
	I0314 11:16:12.477399   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c539cb8e52c8"
	I0314 11:16:12.495270   13130 logs.go:123] Gathering logs for dmesg ...
	I0314 11:16:12.495280   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:16:12.500018   13130 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:16:12.500027   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:16:12.541505   13130 logs.go:123] Gathering logs for coredns [e4efe5340121] ...
	I0314 11:16:12.541517   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4efe5340121"
	I0314 11:16:11.932400   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:16:15.059651   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:16:16.934791   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:16:16.935042   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:16:16.958886   13262 logs.go:276] 2 containers: [f9395aa9cac2 bc405a5fcc4c]
	I0314 11:16:16.959006   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:16:16.979485   13262 logs.go:276] 2 containers: [d36e3bec2911 8e89db56a692]
	I0314 11:16:16.979572   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:16:16.991411   13262 logs.go:276] 1 containers: [0301160bab63]
	I0314 11:16:16.991484   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:16:17.002301   13262 logs.go:276] 2 containers: [d82fc4548c26 82be34d06648]
	I0314 11:16:17.002372   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:16:17.013094   13262 logs.go:276] 1 containers: [25bb30ac6daf]
	I0314 11:16:17.013166   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:16:17.023697   13262 logs.go:276] 2 containers: [1f70f1399231 c1e3435f5898]
	I0314 11:16:17.023782   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:16:17.033925   13262 logs.go:276] 0 containers: []
	W0314 11:16:17.033937   13262 logs.go:278] No container was found matching "kindnet"
	I0314 11:16:17.033995   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:16:17.043897   13262 logs.go:276] 0 containers: []
	W0314 11:16:17.043908   13262 logs.go:278] No container was found matching "storage-provisioner"
	I0314 11:16:17.043918   13262 logs.go:123] Gathering logs for kube-scheduler [82be34d06648] ...
	I0314 11:16:17.043924   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82be34d06648"
	I0314 11:16:17.059048   13262 logs.go:123] Gathering logs for kube-controller-manager [c1e3435f5898] ...
	I0314 11:16:17.059061   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1e3435f5898"
	I0314 11:16:17.082981   13262 logs.go:123] Gathering logs for kubelet ...
	I0314 11:16:17.082993   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:16:17.113517   13262 logs.go:123] Gathering logs for dmesg ...
	I0314 11:16:17.113528   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:16:17.117539   13262 logs.go:123] Gathering logs for kube-apiserver [bc405a5fcc4c] ...
	I0314 11:16:17.117548   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc405a5fcc4c"
	I0314 11:16:17.130034   13262 logs.go:123] Gathering logs for etcd [8e89db56a692] ...
	I0314 11:16:17.130045   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e89db56a692"
	I0314 11:16:17.145032   13262 logs.go:123] Gathering logs for coredns [0301160bab63] ...
	I0314 11:16:17.145043   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0301160bab63"
	I0314 11:16:17.156302   13262 logs.go:123] Gathering logs for kube-scheduler [d82fc4548c26] ...
	I0314 11:16:17.156315   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d82fc4548c26"
	I0314 11:16:17.179298   13262 logs.go:123] Gathering logs for etcd [d36e3bec2911] ...
	I0314 11:16:17.179308   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36e3bec2911"
	I0314 11:16:17.193102   13262 logs.go:123] Gathering logs for kube-proxy [25bb30ac6daf] ...
	I0314 11:16:17.193114   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25bb30ac6daf"
	I0314 11:16:17.204429   13262 logs.go:123] Gathering logs for kube-controller-manager [1f70f1399231] ...
	I0314 11:16:17.204444   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f70f1399231"
	I0314 11:16:17.221963   13262 logs.go:123] Gathering logs for Docker ...
	I0314 11:16:17.221974   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:16:17.244695   13262 logs.go:123] Gathering logs for container status ...
	I0314 11:16:17.244706   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:16:17.256282   13262 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:16:17.256294   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:16:17.290913   13262 logs.go:123] Gathering logs for kube-apiserver [f9395aa9cac2] ...
	I0314 11:16:17.290926   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9395aa9cac2"
	I0314 11:16:19.807359   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:16:20.062128   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:16:20.062497   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:16:20.099176   13130 logs.go:276] 1 containers: [f764ecfece48]
	I0314 11:16:20.099324   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:16:20.119498   13130 logs.go:276] 1 containers: [41bacb9f9ff3]
	I0314 11:16:20.119599   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:16:20.135525   13130 logs.go:276] 4 containers: [6055464981da 26e593605a81 e4efe5340121 a5edbd1e8e3a]
	I0314 11:16:20.135608   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:16:20.147699   13130 logs.go:276] 1 containers: [12305d6bc0e3]
	I0314 11:16:20.147765   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:16:20.158469   13130 logs.go:276] 1 containers: [b218570cde77]
	I0314 11:16:20.158540   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:16:20.169253   13130 logs.go:276] 1 containers: [c539cb8e52c8]
	I0314 11:16:20.169314   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:16:20.179374   13130 logs.go:276] 0 containers: []
	W0314 11:16:20.179388   13130 logs.go:278] No container was found matching "kindnet"
	I0314 11:16:20.179451   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:16:20.189981   13130 logs.go:276] 1 containers: [118988a93a39]
	I0314 11:16:20.189998   13130 logs.go:123] Gathering logs for dmesg ...
	I0314 11:16:20.190007   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:16:20.194959   13130 logs.go:123] Gathering logs for coredns [6055464981da] ...
	I0314 11:16:20.194964   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6055464981da"
	I0314 11:16:20.207723   13130 logs.go:123] Gathering logs for storage-provisioner [118988a93a39] ...
	I0314 11:16:20.207736   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 118988a93a39"
	I0314 11:16:20.221563   13130 logs.go:123] Gathering logs for container status ...
	I0314 11:16:20.221574   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:16:20.234139   13130 logs.go:123] Gathering logs for kube-apiserver [f764ecfece48] ...
	I0314 11:16:20.234148   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f764ecfece48"
	I0314 11:16:20.250200   13130 logs.go:123] Gathering logs for coredns [a5edbd1e8e3a] ...
	I0314 11:16:20.250210   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5edbd1e8e3a"
	I0314 11:16:20.261875   13130 logs.go:123] Gathering logs for kube-scheduler [12305d6bc0e3] ...
	I0314 11:16:20.261889   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12305d6bc0e3"
	I0314 11:16:20.276985   13130 logs.go:123] Gathering logs for kube-controller-manager [c539cb8e52c8] ...
	I0314 11:16:20.276997   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c539cb8e52c8"
	I0314 11:16:20.294938   13130 logs.go:123] Gathering logs for kubelet ...
	I0314 11:16:20.294948   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:16:20.331323   13130 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:16:20.331335   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:16:20.365896   13130 logs.go:123] Gathering logs for etcd [41bacb9f9ff3] ...
	I0314 11:16:20.365907   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41bacb9f9ff3"
	I0314 11:16:20.386967   13130 logs.go:123] Gathering logs for coredns [e4efe5340121] ...
	I0314 11:16:20.386983   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4efe5340121"
	I0314 11:16:20.400305   13130 logs.go:123] Gathering logs for kube-proxy [b218570cde77] ...
	I0314 11:16:20.400315   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b218570cde77"
	I0314 11:16:20.412530   13130 logs.go:123] Gathering logs for Docker ...
	I0314 11:16:20.412540   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:16:20.437468   13130 logs.go:123] Gathering logs for coredns [26e593605a81] ...
	I0314 11:16:20.437475   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26e593605a81"
	I0314 11:16:22.950205   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:16:24.809952   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:16:24.810103   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:16:24.822711   13262 logs.go:276] 2 containers: [f9395aa9cac2 bc405a5fcc4c]
	I0314 11:16:24.822781   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:16:24.833488   13262 logs.go:276] 2 containers: [d36e3bec2911 8e89db56a692]
	I0314 11:16:24.833558   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:16:24.845290   13262 logs.go:276] 1 containers: [0301160bab63]
	I0314 11:16:24.845364   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:16:24.856973   13262 logs.go:276] 2 containers: [d82fc4548c26 82be34d06648]
	I0314 11:16:24.857048   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:16:24.867491   13262 logs.go:276] 1 containers: [25bb30ac6daf]
	I0314 11:16:24.867559   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:16:24.878105   13262 logs.go:276] 2 containers: [1f70f1399231 c1e3435f5898]
	I0314 11:16:24.878178   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:16:24.888469   13262 logs.go:276] 0 containers: []
	W0314 11:16:24.888480   13262 logs.go:278] No container was found matching "kindnet"
	I0314 11:16:24.888538   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:16:24.898689   13262 logs.go:276] 0 containers: []
	W0314 11:16:24.898701   13262 logs.go:278] No container was found matching "storage-provisioner"
	I0314 11:16:24.898709   13262 logs.go:123] Gathering logs for etcd [d36e3bec2911] ...
	I0314 11:16:24.898715   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36e3bec2911"
	I0314 11:16:24.912222   13262 logs.go:123] Gathering logs for dmesg ...
	I0314 11:16:24.912235   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:16:24.916439   13262 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:16:24.916445   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:16:24.950545   13262 logs.go:123] Gathering logs for kube-controller-manager [1f70f1399231] ...
	I0314 11:16:24.950560   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f70f1399231"
	I0314 11:16:24.968318   13262 logs.go:123] Gathering logs for kube-apiserver [f9395aa9cac2] ...
	I0314 11:16:24.968330   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9395aa9cac2"
	I0314 11:16:24.982230   13262 logs.go:123] Gathering logs for coredns [0301160bab63] ...
	I0314 11:16:24.982240   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0301160bab63"
	I0314 11:16:24.993216   13262 logs.go:123] Gathering logs for kube-proxy [25bb30ac6daf] ...
	I0314 11:16:24.993227   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25bb30ac6daf"
	I0314 11:16:25.004279   13262 logs.go:123] Gathering logs for container status ...
	I0314 11:16:25.004290   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:16:25.016136   13262 logs.go:123] Gathering logs for kube-apiserver [bc405a5fcc4c] ...
	I0314 11:16:25.016148   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc405a5fcc4c"
	I0314 11:16:25.031324   13262 logs.go:123] Gathering logs for etcd [8e89db56a692] ...
	I0314 11:16:25.031336   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e89db56a692"
	I0314 11:16:25.045687   13262 logs.go:123] Gathering logs for kube-scheduler [82be34d06648] ...
	I0314 11:16:25.045700   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82be34d06648"
	I0314 11:16:25.061100   13262 logs.go:123] Gathering logs for kube-controller-manager [c1e3435f5898] ...
	I0314 11:16:25.061113   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1e3435f5898"
	I0314 11:16:25.078355   13262 logs.go:123] Gathering logs for Docker ...
	I0314 11:16:25.078365   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:16:25.102139   13262 logs.go:123] Gathering logs for kubelet ...
	I0314 11:16:25.102147   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:16:25.129770   13262 logs.go:123] Gathering logs for kube-scheduler [d82fc4548c26] ...
	I0314 11:16:25.129778   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d82fc4548c26"
	I0314 11:16:27.953018   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:16:27.953259   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:16:27.978185   13130 logs.go:276] 1 containers: [f764ecfece48]
	I0314 11:16:27.978287   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:16:27.994532   13130 logs.go:276] 1 containers: [41bacb9f9ff3]
	I0314 11:16:27.994618   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:16:28.007962   13130 logs.go:276] 4 containers: [6055464981da 26e593605a81 e4efe5340121 a5edbd1e8e3a]
	I0314 11:16:28.008039   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:16:28.019014   13130 logs.go:276] 1 containers: [12305d6bc0e3]
	I0314 11:16:28.019082   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:16:28.029572   13130 logs.go:276] 1 containers: [b218570cde77]
	I0314 11:16:28.029644   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:16:28.039998   13130 logs.go:276] 1 containers: [c539cb8e52c8]
	I0314 11:16:28.040068   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:16:28.049764   13130 logs.go:276] 0 containers: []
	W0314 11:16:28.049777   13130 logs.go:278] No container was found matching "kindnet"
	I0314 11:16:28.049832   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:16:28.060705   13130 logs.go:276] 1 containers: [118988a93a39]
	I0314 11:16:28.060725   13130 logs.go:123] Gathering logs for dmesg ...
	I0314 11:16:28.060731   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:16:28.065257   13130 logs.go:123] Gathering logs for coredns [26e593605a81] ...
	I0314 11:16:28.065268   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26e593605a81"
	I0314 11:16:28.076994   13130 logs.go:123] Gathering logs for coredns [e4efe5340121] ...
	I0314 11:16:28.077003   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4efe5340121"
	I0314 11:16:28.089057   13130 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:16:28.089070   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:16:28.127848   13130 logs.go:123] Gathering logs for etcd [41bacb9f9ff3] ...
	I0314 11:16:28.127860   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41bacb9f9ff3"
	I0314 11:16:28.142872   13130 logs.go:123] Gathering logs for kube-proxy [b218570cde77] ...
	I0314 11:16:28.142883   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b218570cde77"
	I0314 11:16:28.156912   13130 logs.go:123] Gathering logs for storage-provisioner [118988a93a39] ...
	I0314 11:16:28.156923   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 118988a93a39"
	I0314 11:16:28.167944   13130 logs.go:123] Gathering logs for kube-controller-manager [c539cb8e52c8] ...
	I0314 11:16:28.167955   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c539cb8e52c8"
	I0314 11:16:28.185216   13130 logs.go:123] Gathering logs for Docker ...
	I0314 11:16:28.185227   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:16:28.209022   13130 logs.go:123] Gathering logs for kubelet ...
	I0314 11:16:28.209032   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:16:28.242570   13130 logs.go:123] Gathering logs for kube-apiserver [f764ecfece48] ...
	I0314 11:16:28.242578   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f764ecfece48"
	I0314 11:16:28.260399   13130 logs.go:123] Gathering logs for coredns [6055464981da] ...
	I0314 11:16:28.260409   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6055464981da"
	I0314 11:16:28.271605   13130 logs.go:123] Gathering logs for coredns [a5edbd1e8e3a] ...
	I0314 11:16:28.271619   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5edbd1e8e3a"
	I0314 11:16:28.283071   13130 logs.go:123] Gathering logs for kube-scheduler [12305d6bc0e3] ...
	I0314 11:16:28.283081   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12305d6bc0e3"
	I0314 11:16:28.298080   13130 logs.go:123] Gathering logs for container status ...
	I0314 11:16:28.298093   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:16:27.659784   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:16:30.816953   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:16:32.662153   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:16:32.662366   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:16:32.682657   13262 logs.go:276] 2 containers: [f9395aa9cac2 bc405a5fcc4c]
	I0314 11:16:32.682769   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:16:32.705460   13262 logs.go:276] 2 containers: [d36e3bec2911 8e89db56a692]
	I0314 11:16:32.705540   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:16:32.716678   13262 logs.go:276] 1 containers: [0301160bab63]
	I0314 11:16:32.716748   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:16:32.727632   13262 logs.go:276] 2 containers: [d82fc4548c26 82be34d06648]
	I0314 11:16:32.727703   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:16:32.743180   13262 logs.go:276] 1 containers: [25bb30ac6daf]
	I0314 11:16:32.743253   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:16:32.753642   13262 logs.go:276] 2 containers: [1f70f1399231 c1e3435f5898]
	I0314 11:16:32.753706   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:16:32.765498   13262 logs.go:276] 0 containers: []
	W0314 11:16:32.765513   13262 logs.go:278] No container was found matching "kindnet"
	I0314 11:16:32.765572   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:16:32.777752   13262 logs.go:276] 0 containers: []
	W0314 11:16:32.777771   13262 logs.go:278] No container was found matching "storage-provisioner"
	I0314 11:16:32.777779   13262 logs.go:123] Gathering logs for kube-controller-manager [1f70f1399231] ...
	I0314 11:16:32.777784   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f70f1399231"
	I0314 11:16:32.795428   13262 logs.go:123] Gathering logs for kube-apiserver [bc405a5fcc4c] ...
	I0314 11:16:32.795438   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc405a5fcc4c"
	I0314 11:16:32.808286   13262 logs.go:123] Gathering logs for etcd [8e89db56a692] ...
	I0314 11:16:32.808298   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e89db56a692"
	I0314 11:16:32.823016   13262 logs.go:123] Gathering logs for container status ...
	I0314 11:16:32.823028   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:16:32.836841   13262 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:16:32.836853   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:16:32.878150   13262 logs.go:123] Gathering logs for kube-apiserver [f9395aa9cac2] ...
	I0314 11:16:32.878164   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9395aa9cac2"
	I0314 11:16:32.892493   13262 logs.go:123] Gathering logs for kube-scheduler [d82fc4548c26] ...
	I0314 11:16:32.892504   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d82fc4548c26"
	I0314 11:16:32.914751   13262 logs.go:123] Gathering logs for kube-scheduler [82be34d06648] ...
	I0314 11:16:32.914764   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82be34d06648"
	I0314 11:16:32.929869   13262 logs.go:123] Gathering logs for kube-proxy [25bb30ac6daf] ...
	I0314 11:16:32.929879   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25bb30ac6daf"
	I0314 11:16:32.942095   13262 logs.go:123] Gathering logs for kube-controller-manager [c1e3435f5898] ...
	I0314 11:16:32.942109   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1e3435f5898"
	I0314 11:16:32.958836   13262 logs.go:123] Gathering logs for dmesg ...
	I0314 11:16:32.958847   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:16:32.963670   13262 logs.go:123] Gathering logs for etcd [d36e3bec2911] ...
	I0314 11:16:32.963676   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36e3bec2911"
	I0314 11:16:32.976903   13262 logs.go:123] Gathering logs for coredns [0301160bab63] ...
	I0314 11:16:32.976914   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0301160bab63"
	I0314 11:16:32.988082   13262 logs.go:123] Gathering logs for Docker ...
	I0314 11:16:32.988095   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:16:33.011633   13262 logs.go:123] Gathering logs for kubelet ...
	I0314 11:16:33.011643   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:16:35.819282   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:16:35.819443   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:16:35.831112   13130 logs.go:276] 1 containers: [f764ecfece48]
	I0314 11:16:35.831184   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:16:35.842073   13130 logs.go:276] 1 containers: [41bacb9f9ff3]
	I0314 11:16:35.842138   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:16:35.853711   13130 logs.go:276] 4 containers: [6055464981da 26e593605a81 e4efe5340121 a5edbd1e8e3a]
	I0314 11:16:35.853801   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:16:35.865951   13130 logs.go:276] 1 containers: [12305d6bc0e3]
	I0314 11:16:35.866021   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:16:35.876929   13130 logs.go:276] 1 containers: [b218570cde77]
	I0314 11:16:35.877002   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:16:35.887575   13130 logs.go:276] 1 containers: [c539cb8e52c8]
	I0314 11:16:35.887653   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:16:35.898042   13130 logs.go:276] 0 containers: []
	W0314 11:16:35.898057   13130 logs.go:278] No container was found matching "kindnet"
	I0314 11:16:35.898122   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:16:35.909371   13130 logs.go:276] 1 containers: [118988a93a39]
	I0314 11:16:35.909393   13130 logs.go:123] Gathering logs for container status ...
	I0314 11:16:35.909399   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:16:35.921184   13130 logs.go:123] Gathering logs for coredns [6055464981da] ...
	I0314 11:16:35.921195   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6055464981da"
	I0314 11:16:35.933355   13130 logs.go:123] Gathering logs for kube-controller-manager [c539cb8e52c8] ...
	I0314 11:16:35.933370   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c539cb8e52c8"
	I0314 11:16:35.951364   13130 logs.go:123] Gathering logs for storage-provisioner [118988a93a39] ...
	I0314 11:16:35.951375   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 118988a93a39"
	I0314 11:16:35.966069   13130 logs.go:123] Gathering logs for dmesg ...
	I0314 11:16:35.966079   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:16:35.971092   13130 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:16:35.971099   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:16:36.005472   13130 logs.go:123] Gathering logs for coredns [26e593605a81] ...
	I0314 11:16:36.005482   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26e593605a81"
	I0314 11:16:36.017222   13130 logs.go:123] Gathering logs for coredns [e4efe5340121] ...
	I0314 11:16:36.017231   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4efe5340121"
	I0314 11:16:36.028999   13130 logs.go:123] Gathering logs for kube-scheduler [12305d6bc0e3] ...
	I0314 11:16:36.029013   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12305d6bc0e3"
	I0314 11:16:36.043881   13130 logs.go:123] Gathering logs for kube-proxy [b218570cde77] ...
	I0314 11:16:36.043893   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b218570cde77"
	I0314 11:16:36.055496   13130 logs.go:123] Gathering logs for kubelet ...
	I0314 11:16:36.055509   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:16:36.090869   13130 logs.go:123] Gathering logs for kube-apiserver [f764ecfece48] ...
	I0314 11:16:36.090878   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f764ecfece48"
	I0314 11:16:36.105345   13130 logs.go:123] Gathering logs for etcd [41bacb9f9ff3] ...
	I0314 11:16:36.105355   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41bacb9f9ff3"
	I0314 11:16:36.119447   13130 logs.go:123] Gathering logs for coredns [a5edbd1e8e3a] ...
	I0314 11:16:36.119458   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5edbd1e8e3a"
	I0314 11:16:36.131509   13130 logs.go:123] Gathering logs for Docker ...
	I0314 11:16:36.131520   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:16:38.658927   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:16:35.541625   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:16:43.661365   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:16:43.661561   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:16:43.689261   13130 logs.go:276] 1 containers: [f764ecfece48]
	I0314 11:16:43.689381   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:16:43.703037   13130 logs.go:276] 1 containers: [41bacb9f9ff3]
	I0314 11:16:43.703125   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:16:43.715230   13130 logs.go:276] 4 containers: [6055464981da 26e593605a81 e4efe5340121 a5edbd1e8e3a]
	I0314 11:16:43.715306   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:16:43.726062   13130 logs.go:276] 1 containers: [12305d6bc0e3]
	I0314 11:16:43.726137   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:16:43.736352   13130 logs.go:276] 1 containers: [b218570cde77]
	I0314 11:16:43.736426   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:16:43.746803   13130 logs.go:276] 1 containers: [c539cb8e52c8]
	I0314 11:16:43.746873   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:16:43.757085   13130 logs.go:276] 0 containers: []
	W0314 11:16:43.757098   13130 logs.go:278] No container was found matching "kindnet"
	I0314 11:16:43.757168   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:16:43.768228   13130 logs.go:276] 1 containers: [118988a93a39]
	I0314 11:16:43.768244   13130 logs.go:123] Gathering logs for etcd [41bacb9f9ff3] ...
	I0314 11:16:43.768250   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41bacb9f9ff3"
	I0314 11:16:43.782491   13130 logs.go:123] Gathering logs for coredns [6055464981da] ...
	I0314 11:16:43.782502   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6055464981da"
	I0314 11:16:43.794358   13130 logs.go:123] Gathering logs for kube-proxy [b218570cde77] ...
	I0314 11:16:43.794369   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b218570cde77"
	I0314 11:16:43.806405   13130 logs.go:123] Gathering logs for kube-controller-manager [c539cb8e52c8] ...
	I0314 11:16:43.806418   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c539cb8e52c8"
	I0314 11:16:43.824223   13130 logs.go:123] Gathering logs for container status ...
	I0314 11:16:43.824233   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:16:43.837337   13130 logs.go:123] Gathering logs for dmesg ...
	I0314 11:16:43.837348   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:16:43.842094   13130 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:16:43.842102   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:16:43.879444   13130 logs.go:123] Gathering logs for storage-provisioner [118988a93a39] ...
	I0314 11:16:43.879458   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 118988a93a39"
	I0314 11:16:43.891506   13130 logs.go:123] Gathering logs for coredns [26e593605a81] ...
	I0314 11:16:43.891519   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26e593605a81"
	I0314 11:16:43.903522   13130 logs.go:123] Gathering logs for coredns [e4efe5340121] ...
	I0314 11:16:43.903534   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4efe5340121"
	I0314 11:16:43.915234   13130 logs.go:123] Gathering logs for coredns [a5edbd1e8e3a] ...
	I0314 11:16:43.915245   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5edbd1e8e3a"
	I0314 11:16:43.928923   13130 logs.go:123] Gathering logs for kubelet ...
	I0314 11:16:43.928934   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:16:43.964454   13130 logs.go:123] Gathering logs for kube-apiserver [f764ecfece48] ...
	I0314 11:16:43.964464   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f764ecfece48"
	I0314 11:16:43.978583   13130 logs.go:123] Gathering logs for kube-scheduler [12305d6bc0e3] ...
	I0314 11:16:43.978593   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12305d6bc0e3"
	I0314 11:16:43.994061   13130 logs.go:123] Gathering logs for Docker ...
	I0314 11:16:43.994070   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:16:40.543923   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:16:40.544017   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:16:40.555024   13262 logs.go:276] 2 containers: [f9395aa9cac2 bc405a5fcc4c]
	I0314 11:16:40.555103   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:16:40.565496   13262 logs.go:276] 2 containers: [d36e3bec2911 8e89db56a692]
	I0314 11:16:40.565564   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:16:40.576068   13262 logs.go:276] 1 containers: [0301160bab63]
	I0314 11:16:40.576135   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:16:40.587127   13262 logs.go:276] 2 containers: [d82fc4548c26 82be34d06648]
	I0314 11:16:40.587202   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:16:40.597471   13262 logs.go:276] 1 containers: [25bb30ac6daf]
	I0314 11:16:40.597533   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:16:40.608168   13262 logs.go:276] 2 containers: [1f70f1399231 c1e3435f5898]
	I0314 11:16:40.608244   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:16:40.618057   13262 logs.go:276] 0 containers: []
	W0314 11:16:40.618068   13262 logs.go:278] No container was found matching "kindnet"
	I0314 11:16:40.618122   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:16:40.627899   13262 logs.go:276] 0 containers: []
	W0314 11:16:40.627914   13262 logs.go:278] No container was found matching "storage-provisioner"
	I0314 11:16:40.627922   13262 logs.go:123] Gathering logs for kube-apiserver [bc405a5fcc4c] ...
	I0314 11:16:40.627928   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc405a5fcc4c"
	I0314 11:16:40.640865   13262 logs.go:123] Gathering logs for kube-controller-manager [c1e3435f5898] ...
	I0314 11:16:40.640876   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1e3435f5898"
	I0314 11:16:40.658911   13262 logs.go:123] Gathering logs for Docker ...
	I0314 11:16:40.658922   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:16:40.682566   13262 logs.go:123] Gathering logs for kubelet ...
	I0314 11:16:40.682579   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:16:40.712115   13262 logs.go:123] Gathering logs for dmesg ...
	I0314 11:16:40.712127   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:16:40.716220   13262 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:16:40.716228   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:16:40.750346   13262 logs.go:123] Gathering logs for kube-apiserver [f9395aa9cac2] ...
	I0314 11:16:40.750360   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9395aa9cac2"
	I0314 11:16:40.764226   13262 logs.go:123] Gathering logs for kube-scheduler [d82fc4548c26] ...
	I0314 11:16:40.764236   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d82fc4548c26"
	I0314 11:16:40.787458   13262 logs.go:123] Gathering logs for kube-proxy [25bb30ac6daf] ...
	I0314 11:16:40.787469   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25bb30ac6daf"
	I0314 11:16:40.805664   13262 logs.go:123] Gathering logs for container status ...
	I0314 11:16:40.805674   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:16:40.817546   13262 logs.go:123] Gathering logs for etcd [d36e3bec2911] ...
	I0314 11:16:40.817557   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36e3bec2911"
	I0314 11:16:40.831760   13262 logs.go:123] Gathering logs for coredns [0301160bab63] ...
	I0314 11:16:40.831775   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0301160bab63"
	I0314 11:16:40.843506   13262 logs.go:123] Gathering logs for kube-scheduler [82be34d06648] ...
	I0314 11:16:40.843517   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82be34d06648"
	I0314 11:16:40.859345   13262 logs.go:123] Gathering logs for kube-controller-manager [1f70f1399231] ...
	I0314 11:16:40.859356   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f70f1399231"
	I0314 11:16:40.886032   13262 logs.go:123] Gathering logs for etcd [8e89db56a692] ...
	I0314 11:16:40.886041   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e89db56a692"
	I0314 11:16:43.402145   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:16:46.520675   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:16:48.404795   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:16:48.404955   13262 kubeadm.go:591] duration metric: took 4m2.971525875s to restartPrimaryControlPlane
	W0314 11:16:48.405086   13262 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0314 11:16:48.405121   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0314 11:16:49.344695   13262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 11:16:49.349573   13262 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 11:16:49.352229   13262 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 11:16:49.354869   13262 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 11:16:49.354874   13262 kubeadm.go:156] found existing configuration files:
	
	I0314 11:16:49.354891   13262 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52332 /etc/kubernetes/admin.conf
	I0314 11:16:49.357572   13262 kubeadm.go:162] "https://control-plane.minikube.internal:52332" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:52332 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 11:16:49.357599   13262 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 11:16:49.360141   13262 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52332 /etc/kubernetes/kubelet.conf
	I0314 11:16:49.363058   13262 kubeadm.go:162] "https://control-plane.minikube.internal:52332" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:52332 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 11:16:49.363081   13262 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 11:16:49.366113   13262 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52332 /etc/kubernetes/controller-manager.conf
	I0314 11:16:49.368712   13262 kubeadm.go:162] "https://control-plane.minikube.internal:52332" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:52332 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 11:16:49.368735   13262 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 11:16:49.371405   13262 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52332 /etc/kubernetes/scheduler.conf
	I0314 11:16:49.374417   13262 kubeadm.go:162] "https://control-plane.minikube.internal:52332" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:52332 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 11:16:49.374441   13262 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
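The four grep/rm pairs above are minikube's stale-config cleanup: any kubeconfig under /etc/kubernetes that does not reference the current apiserver endpoint is removed before `kubeadm init` runs (here every grep exits 2 because the files do not exist after `kubeadm reset`). A bash sketch of the same pattern, assumed equivalent rather than the literal minikube source:

    endpoint="https://control-plane.minikube.internal:52332"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # keep the file only if it points at the current endpoint
      sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done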
	I0314 11:16:49.376967   13262 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0314 11:16:49.394374   13262 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0314 11:16:49.394490   13262 kubeadm.go:309] [preflight] Running pre-flight checks
	I0314 11:16:49.447845   13262 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0314 11:16:49.447938   13262 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0314 11:16:49.448006   13262 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	(The "in beforehand" wording above is kubeadm's own output, preserved verbatim.)
	I0314 11:16:49.500170   13262 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0314 11:16:49.508117   13262 out.go:204]   - Generating certificates and keys ...
	I0314 11:16:49.508155   13262 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0314 11:16:49.508189   13262 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0314 11:16:49.508229   13262 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0314 11:16:49.508263   13262 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0314 11:16:49.508301   13262 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0314 11:16:49.508327   13262 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0314 11:16:49.508362   13262 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0314 11:16:49.508436   13262 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0314 11:16:49.508497   13262 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0314 11:16:49.508536   13262 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0314 11:16:49.508559   13262 kubeadm.go:309] [certs] Using the existing "sa" key
	I0314 11:16:49.508595   13262 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0314 11:16:49.586364   13262 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0314 11:16:49.630491   13262 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0314 11:16:49.819305   13262 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0314 11:16:50.001724   13262 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0314 11:16:50.031103   13262 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0314 11:16:50.031479   13262 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0314 11:16:50.031598   13262 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0314 11:16:50.101389   13262 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0314 11:16:50.104752   13262 out.go:204]   - Booting up control plane ...
	I0314 11:16:50.104806   13262 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0314 11:16:50.104855   13262 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0314 11:16:50.104893   13262 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0314 11:16:50.104933   13262 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0314 11:16:50.105934   13262 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0314 11:16:51.522889   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:16:51.523005   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:16:51.534390   13130 logs.go:276] 1 containers: [f764ecfece48]
	I0314 11:16:51.534462   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:16:51.545630   13130 logs.go:276] 1 containers: [41bacb9f9ff3]
	I0314 11:16:51.545708   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:16:51.557496   13130 logs.go:276] 4 containers: [6055464981da 26e593605a81 e4efe5340121 a5edbd1e8e3a]
	I0314 11:16:51.557581   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:16:51.568998   13130 logs.go:276] 1 containers: [12305d6bc0e3]
	I0314 11:16:51.569078   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:16:51.580712   13130 logs.go:276] 1 containers: [b218570cde77]
	I0314 11:16:51.580807   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:16:51.592717   13130 logs.go:276] 1 containers: [c539cb8e52c8]
	I0314 11:16:51.592785   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:16:51.603919   13130 logs.go:276] 0 containers: []
	W0314 11:16:51.603933   13130 logs.go:278] No container was found matching "kindnet"
	I0314 11:16:51.603996   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:16:51.615457   13130 logs.go:276] 1 containers: [118988a93a39]
	I0314 11:16:51.615475   13130 logs.go:123] Gathering logs for dmesg ...
	I0314 11:16:51.615484   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:16:51.620360   13130 logs.go:123] Gathering logs for coredns [6055464981da] ...
	I0314 11:16:51.620372   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6055464981da"
	I0314 11:16:51.632939   13130 logs.go:123] Gathering logs for Docker ...
	I0314 11:16:51.632954   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:16:51.657963   13130 logs.go:123] Gathering logs for container status ...
	I0314 11:16:51.657978   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:16:51.671455   13130 logs.go:123] Gathering logs for kubelet ...
	I0314 11:16:51.671468   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:16:51.708887   13130 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:16:51.708906   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:16:51.762455   13130 logs.go:123] Gathering logs for kube-apiserver [f764ecfece48] ...
	I0314 11:16:51.762470   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f764ecfece48"
	I0314 11:16:51.777527   13130 logs.go:123] Gathering logs for etcd [41bacb9f9ff3] ...
	I0314 11:16:51.777541   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41bacb9f9ff3"
	I0314 11:16:51.793855   13130 logs.go:123] Gathering logs for coredns [26e593605a81] ...
	I0314 11:16:51.793872   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26e593605a81"
	I0314 11:16:51.806434   13130 logs.go:123] Gathering logs for coredns [e4efe5340121] ...
	I0314 11:16:51.806448   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4efe5340121"
	I0314 11:16:51.821409   13130 logs.go:123] Gathering logs for kube-scheduler [12305d6bc0e3] ...
	I0314 11:16:51.821424   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12305d6bc0e3"
	I0314 11:16:51.838716   13130 logs.go:123] Gathering logs for kube-proxy [b218570cde77] ...
	I0314 11:16:51.838729   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b218570cde77"
	I0314 11:16:51.860558   13130 logs.go:123] Gathering logs for storage-provisioner [118988a93a39] ...
	I0314 11:16:51.860571   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 118988a93a39"
	I0314 11:16:51.872799   13130 logs.go:123] Gathering logs for coredns [a5edbd1e8e3a] ...
	I0314 11:16:51.872813   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5edbd1e8e3a"
	I0314 11:16:51.889039   13130 logs.go:123] Gathering logs for kube-controller-manager [c539cb8e52c8] ...
	I0314 11:16:51.889050   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c539cb8e52c8"
	I0314 11:16:54.107582   13262 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.001759 seconds
	I0314 11:16:54.107642   13262 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0314 11:16:54.112265   13262 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0314 11:16:54.619559   13262 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0314 11:16:54.619787   13262 kubeadm.go:309] [mark-control-plane] Marking the node stopped-upgrade-157000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0314 11:16:55.125455   13262 kubeadm.go:309] [bootstrap-token] Using token: xpsove.xojynfksv7i7mjeh
	I0314 11:16:55.131880   13262 out.go:204]   - Configuring RBAC rules ...
	I0314 11:16:55.131957   13262 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0314 11:16:55.132009   13262 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0314 11:16:55.138884   13262 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0314 11:16:55.139934   13262 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0314 11:16:55.140818   13262 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0314 11:16:55.141678   13262 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0314 11:16:55.145198   13262 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0314 11:16:55.275826   13262 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0314 11:16:55.530158   13262 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0314 11:16:55.530658   13262 kubeadm.go:309] 
	I0314 11:16:55.530694   13262 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0314 11:16:55.530700   13262 kubeadm.go:309] 
	I0314 11:16:55.530751   13262 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0314 11:16:55.530757   13262 kubeadm.go:309] 
	I0314 11:16:55.530778   13262 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0314 11:16:55.530822   13262 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0314 11:16:55.530851   13262 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0314 11:16:55.530854   13262 kubeadm.go:309] 
	I0314 11:16:55.530885   13262 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0314 11:16:55.530888   13262 kubeadm.go:309] 
	I0314 11:16:55.530921   13262 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0314 11:16:55.530926   13262 kubeadm.go:309] 
	I0314 11:16:55.530959   13262 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0314 11:16:55.531000   13262 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0314 11:16:55.531042   13262 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0314 11:16:55.531047   13262 kubeadm.go:309] 
	I0314 11:16:55.531099   13262 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0314 11:16:55.531156   13262 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0314 11:16:55.531161   13262 kubeadm.go:309] 
	I0314 11:16:55.531225   13262 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token xpsove.xojynfksv7i7mjeh \
	I0314 11:16:55.531288   13262 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8e5a4174d82744a5f88c6921b8e1e2cb9a0b16334ed79a2160efb286b25bc185 \
	I0314 11:16:55.531308   13262 kubeadm.go:309] 	--control-plane 
	I0314 11:16:55.531312   13262 kubeadm.go:309] 
	I0314 11:16:55.531382   13262 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0314 11:16:55.531389   13262 kubeadm.go:309] 
	I0314 11:16:55.531434   13262 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token xpsove.xojynfksv7i7mjeh \
	I0314 11:16:55.531499   13262 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8e5a4174d82744a5f88c6921b8e1e2cb9a0b16334ed79a2160efb286b25bc185 
	I0314 11:16:55.531643   13262 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0314 11:16:55.531691   13262 cni.go:84] Creating CNI manager for ""
	I0314 11:16:55.531703   13262 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0314 11:16:55.533133   13262 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0314 11:16:55.541116   13262 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0314 11:16:55.543986   13262 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
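The `scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)` line writes the bridge CNI config recommended two lines earlier. The exact 457-byte payload is not shown in the log; the following is a hedged sketch of a typical bridge conflist, with the pod subnet as an assumed placeholder:

    # Assumed-typical bridge CNI config; the real file minikube writes may differ.
    sudo mkdir -p /etc/cni/net.d
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF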
	I0314 11:16:55.549116   13262 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0314 11:16:55.549155   13262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 11:16:55.549223   13262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-157000 minikube.k8s.io/updated_at=2024_03_14T11_16_55_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7 minikube.k8s.io/name=stopped-upgrade-157000 minikube.k8s.io/primary=true
	I0314 11:16:55.588760   13262 ops.go:34] apiserver oom_adj: -16
	I0314 11:16:55.588775   13262 kubeadm.go:1106] duration metric: took 39.654375ms to wait for elevateKubeSystemPrivileges
	W0314 11:16:55.588794   13262 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0314 11:16:55.588799   13262 kubeadm.go:393] duration metric: took 4m10.169230833s to StartCluster
	I0314 11:16:55.588808   13262 settings.go:142] acquiring lock: {Name:mk5ca7daa9f67a4c042500e8aa0b177318634dbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 11:16:55.588894   13262 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18384-10823/kubeconfig
	I0314 11:16:55.589955   13262 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18384-10823/kubeconfig: {Name:mk22117ed76e85ca64a0d4fa77d593f7fc7d1176 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 11:16:55.590147   13262 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0314 11:16:55.593174   13262 out.go:177] * Verifying Kubernetes components...
	I0314 11:16:55.590154   13262 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0314 11:16:55.590323   13262 config.go:182] Loaded profile config "stopped-upgrade-157000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0314 11:16:55.601182   13262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 11:16:55.601193   13262 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-157000"
	I0314 11:16:55.601215   13262 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-157000"
	W0314 11:16:55.601220   13262 addons.go:243] addon storage-provisioner should already be in state true
	I0314 11:16:55.601221   13262 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-157000"
	I0314 11:16:55.601266   13262 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-157000"
	I0314 11:16:55.601241   13262 host.go:66] Checking if "stopped-upgrade-157000" exists ...
	I0314 11:16:55.603190   13262 kapi.go:59] client config for stopped-upgrade-157000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/stopped-upgrade-157000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/stopped-upgrade-157000/client.key", CAFile:"/Users/jenkins/minikube-integration/18384-10823/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105dd8630), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0314 11:16:55.603413   13262 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-157000"
	W0314 11:16:55.603420   13262 addons.go:243] addon default-storageclass should already be in state true
	I0314 11:16:55.603428   13262 host.go:66] Checking if "stopped-upgrade-157000" exists ...
	I0314 11:16:55.608097   13262 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 11:16:54.409509   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:16:55.612050   13262 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 11:16:55.612060   13262 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0314 11:16:55.612070   13262 sshutil.go:53] new ssh client: &{IP:localhost Port:52297 SSHKeyPath:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/stopped-upgrade-157000/id_rsa Username:docker}
	I0314 11:16:55.612820   13262 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0314 11:16:55.612823   13262 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0314 11:16:55.612828   13262 sshutil.go:53] new ssh client: &{IP:localhost Port:52297 SSHKeyPath:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/stopped-upgrade-157000/id_rsa Username:docker}
	I0314 11:16:55.681361   13262 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 11:16:55.685924   13262 api_server.go:52] waiting for apiserver process to appear ...
	I0314 11:16:55.685970   13262 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 11:16:55.689860   13262 api_server.go:72] duration metric: took 99.698667ms to wait for apiserver process to appear ...
	I0314 11:16:55.689866   13262 api_server.go:88] waiting for apiserver healthz status ...
	I0314 11:16:55.689873   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:16:55.711957   13262 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 11:16:55.722921   13262 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0314 11:16:59.411771   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:16:59.411992   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:16:59.437661   13130 logs.go:276] 1 containers: [f764ecfece48]
	I0314 11:16:59.437752   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:16:59.451981   13130 logs.go:276] 1 containers: [41bacb9f9ff3]
	I0314 11:16:59.452052   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:16:59.463283   13130 logs.go:276] 4 containers: [6055464981da 26e593605a81 e4efe5340121 a5edbd1e8e3a]
	I0314 11:16:59.463353   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:16:59.473683   13130 logs.go:276] 1 containers: [12305d6bc0e3]
	I0314 11:16:59.473742   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:16:59.484961   13130 logs.go:276] 1 containers: [b218570cde77]
	I0314 11:16:59.485033   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:16:59.495590   13130 logs.go:276] 1 containers: [c539cb8e52c8]
	I0314 11:16:59.495652   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:16:59.506057   13130 logs.go:276] 0 containers: []
	W0314 11:16:59.506070   13130 logs.go:278] No container was found matching "kindnet"
	I0314 11:16:59.506126   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:16:59.516763   13130 logs.go:276] 1 containers: [118988a93a39]
	I0314 11:16:59.516780   13130 logs.go:123] Gathering logs for kube-apiserver [f764ecfece48] ...
	I0314 11:16:59.516788   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f764ecfece48"
	I0314 11:16:59.531001   13130 logs.go:123] Gathering logs for coredns [26e593605a81] ...
	I0314 11:16:59.531011   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26e593605a81"
	I0314 11:16:59.543306   13130 logs.go:123] Gathering logs for kube-proxy [b218570cde77] ...
	I0314 11:16:59.543318   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b218570cde77"
	I0314 11:16:59.554923   13130 logs.go:123] Gathering logs for storage-provisioner [118988a93a39] ...
	I0314 11:16:59.554933   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 118988a93a39"
	I0314 11:16:59.566884   13130 logs.go:123] Gathering logs for dmesg ...
	I0314 11:16:59.566894   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:16:59.571933   13130 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:16:59.571940   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:16:59.606617   13130 logs.go:123] Gathering logs for etcd [41bacb9f9ff3] ...
	I0314 11:16:59.606629   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41bacb9f9ff3"
	I0314 11:16:59.621250   13130 logs.go:123] Gathering logs for kube-scheduler [12305d6bc0e3] ...
	I0314 11:16:59.621263   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12305d6bc0e3"
	I0314 11:16:59.636533   13130 logs.go:123] Gathering logs for Docker ...
	I0314 11:16:59.636543   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:16:59.661154   13130 logs.go:123] Gathering logs for coredns [6055464981da] ...
	I0314 11:16:59.661162   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6055464981da"
	I0314 11:16:59.673149   13130 logs.go:123] Gathering logs for coredns [a5edbd1e8e3a] ...
	I0314 11:16:59.673160   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5edbd1e8e3a"
	I0314 11:16:59.684672   13130 logs.go:123] Gathering logs for kubelet ...
	I0314 11:16:59.684685   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:16:59.720445   13130 logs.go:123] Gathering logs for coredns [e4efe5340121] ...
	I0314 11:16:59.720453   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4efe5340121"
	I0314 11:16:59.732837   13130 logs.go:123] Gathering logs for kube-controller-manager [c539cb8e52c8] ...
	I0314 11:16:59.732850   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c539cb8e52c8"
	I0314 11:16:59.750241   13130 logs.go:123] Gathering logs for container status ...
	I0314 11:16:59.750253   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:17:02.264269   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:17:00.691963   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:17:00.692008   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:17:07.266524   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:17:07.266650   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:17:07.278740   13130 logs.go:276] 1 containers: [f764ecfece48]
	I0314 11:17:07.278813   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:17:07.289707   13130 logs.go:276] 1 containers: [41bacb9f9ff3]
	I0314 11:17:07.289772   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:17:07.300561   13130 logs.go:276] 4 containers: [6055464981da 26e593605a81 e4efe5340121 a5edbd1e8e3a]
	I0314 11:17:07.300640   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:17:07.311996   13130 logs.go:276] 1 containers: [12305d6bc0e3]
	I0314 11:17:07.312062   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:17:07.322766   13130 logs.go:276] 1 containers: [b218570cde77]
	I0314 11:17:07.322834   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:17:07.333465   13130 logs.go:276] 1 containers: [c539cb8e52c8]
	I0314 11:17:07.333540   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:17:07.343334   13130 logs.go:276] 0 containers: []
	W0314 11:17:07.343350   13130 logs.go:278] No container was found matching "kindnet"
	I0314 11:17:07.343418   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:17:07.354710   13130 logs.go:276] 1 containers: [118988a93a39]
	I0314 11:17:07.354729   13130 logs.go:123] Gathering logs for coredns [e4efe5340121] ...
	I0314 11:17:07.354734   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4efe5340121"
	I0314 11:17:07.366752   13130 logs.go:123] Gathering logs for kube-proxy [b218570cde77] ...
	I0314 11:17:07.366763   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b218570cde77"
	I0314 11:17:07.378243   13130 logs.go:123] Gathering logs for kube-controller-manager [c539cb8e52c8] ...
	I0314 11:17:07.378253   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c539cb8e52c8"
	I0314 11:17:07.397873   13130 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:17:07.397884   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:17:07.435677   13130 logs.go:123] Gathering logs for coredns [a5edbd1e8e3a] ...
	I0314 11:17:07.435688   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5edbd1e8e3a"
	I0314 11:17:07.449118   13130 logs.go:123] Gathering logs for kube-scheduler [12305d6bc0e3] ...
	I0314 11:17:07.449131   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12305d6bc0e3"
	I0314 11:17:07.464053   13130 logs.go:123] Gathering logs for Docker ...
	I0314 11:17:07.464066   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:17:07.489739   13130 logs.go:123] Gathering logs for kube-apiserver [f764ecfece48] ...
	I0314 11:17:07.489747   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f764ecfece48"
	I0314 11:17:07.506040   13130 logs.go:123] Gathering logs for kubelet ...
	I0314 11:17:07.506052   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:17:07.542517   13130 logs.go:123] Gathering logs for dmesg ...
	I0314 11:17:07.542528   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:17:07.547329   13130 logs.go:123] Gathering logs for etcd [41bacb9f9ff3] ...
	I0314 11:17:07.547337   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41bacb9f9ff3"
	I0314 11:17:07.561971   13130 logs.go:123] Gathering logs for coredns [6055464981da] ...
	I0314 11:17:07.561985   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6055464981da"
	I0314 11:17:07.573519   13130 logs.go:123] Gathering logs for coredns [26e593605a81] ...
	I0314 11:17:07.573534   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26e593605a81"
	I0314 11:17:07.594215   13130 logs.go:123] Gathering logs for storage-provisioner [118988a93a39] ...
	I0314 11:17:07.594229   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 118988a93a39"
	I0314 11:17:07.606485   13130 logs.go:123] Gathering logs for container status ...
	I0314 11:17:07.606499   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:17:05.692400   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:17:05.692425   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:17:10.120315   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:17:10.693171   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:17:10.693192   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:17:15.122624   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:17:15.122868   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:17:15.146422   13130 logs.go:276] 1 containers: [f764ecfece48]
	I0314 11:17:15.146544   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:17:15.163272   13130 logs.go:276] 1 containers: [41bacb9f9ff3]
	I0314 11:17:15.163358   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:17:15.176549   13130 logs.go:276] 4 containers: [6055464981da 26e593605a81 e4efe5340121 a5edbd1e8e3a]
	I0314 11:17:15.176625   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:17:15.189216   13130 logs.go:276] 1 containers: [12305d6bc0e3]
	I0314 11:17:15.189286   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:17:15.200315   13130 logs.go:276] 1 containers: [b218570cde77]
	I0314 11:17:15.200384   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:17:15.212238   13130 logs.go:276] 1 containers: [c539cb8e52c8]
	I0314 11:17:15.212311   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:17:15.222358   13130 logs.go:276] 0 containers: []
	W0314 11:17:15.222369   13130 logs.go:278] No container was found matching "kindnet"
	I0314 11:17:15.222428   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:17:15.232803   13130 logs.go:276] 1 containers: [118988a93a39]
	I0314 11:17:15.232824   13130 logs.go:123] Gathering logs for coredns [6055464981da] ...
	I0314 11:17:15.232830   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6055464981da"
	I0314 11:17:15.245043   13130 logs.go:123] Gathering logs for coredns [e4efe5340121] ...
	I0314 11:17:15.245053   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4efe5340121"
	I0314 11:17:15.256955   13130 logs.go:123] Gathering logs for dmesg ...
	I0314 11:17:15.256964   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:17:15.261301   13130 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:17:15.261310   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:17:15.295448   13130 logs.go:123] Gathering logs for kube-controller-manager [c539cb8e52c8] ...
	I0314 11:17:15.295459   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c539cb8e52c8"
	I0314 11:17:15.312786   13130 logs.go:123] Gathering logs for container status ...
	I0314 11:17:15.312796   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:17:15.324904   13130 logs.go:123] Gathering logs for kubelet ...
	I0314 11:17:15.324916   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:17:15.360700   13130 logs.go:123] Gathering logs for kube-apiserver [f764ecfece48] ...
	I0314 11:17:15.360712   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f764ecfece48"
	I0314 11:17:15.375500   13130 logs.go:123] Gathering logs for coredns [a5edbd1e8e3a] ...
	I0314 11:17:15.375510   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5edbd1e8e3a"
	I0314 11:17:15.391065   13130 logs.go:123] Gathering logs for kube-scheduler [12305d6bc0e3] ...
	I0314 11:17:15.391079   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12305d6bc0e3"
	I0314 11:17:15.407119   13130 logs.go:123] Gathering logs for etcd [41bacb9f9ff3] ...
	I0314 11:17:15.407130   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41bacb9f9ff3"
	I0314 11:17:15.422242   13130 logs.go:123] Gathering logs for coredns [26e593605a81] ...
	I0314 11:17:15.422255   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26e593605a81"
	I0314 11:17:15.437261   13130 logs.go:123] Gathering logs for kube-proxy [b218570cde77] ...
	I0314 11:17:15.437272   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b218570cde77"
	I0314 11:17:15.457334   13130 logs.go:123] Gathering logs for storage-provisioner [118988a93a39] ...
	I0314 11:17:15.457345   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 118988a93a39"
	I0314 11:17:15.468908   13130 logs.go:123] Gathering logs for Docker ...
	I0314 11:17:15.468922   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:17:17.994113   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:17:15.693700   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:17:15.693727   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:17:22.994765   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:17:22.994875   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:17:23.007045   13130 logs.go:276] 1 containers: [f764ecfece48]
	I0314 11:17:23.007117   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:17:23.022284   13130 logs.go:276] 1 containers: [41bacb9f9ff3]
	I0314 11:17:23.022351   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:17:23.038116   13130 logs.go:276] 4 containers: [6055464981da 26e593605a81 e4efe5340121 a5edbd1e8e3a]
	I0314 11:17:23.038188   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:17:23.071305   13130 logs.go:276] 1 containers: [12305d6bc0e3]
	I0314 11:17:23.071373   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:17:23.088729   13130 logs.go:276] 1 containers: [b218570cde77]
	I0314 11:17:23.088802   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:17:23.099722   13130 logs.go:276] 1 containers: [c539cb8e52c8]
	I0314 11:17:23.099798   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:17:23.110179   13130 logs.go:276] 0 containers: []
	W0314 11:17:23.110191   13130 logs.go:278] No container was found matching "kindnet"
	I0314 11:17:23.110250   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:17:23.122400   13130 logs.go:276] 1 containers: [118988a93a39]
	I0314 11:17:23.122417   13130 logs.go:123] Gathering logs for kubelet ...
	I0314 11:17:23.122422   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:17:23.159439   13130 logs.go:123] Gathering logs for etcd [41bacb9f9ff3] ...
	I0314 11:17:23.159448   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41bacb9f9ff3"
	I0314 11:17:23.180229   13130 logs.go:123] Gathering logs for Docker ...
	I0314 11:17:23.180241   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:17:23.204469   13130 logs.go:123] Gathering logs for dmesg ...
	I0314 11:17:23.204481   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:17:23.208795   13130 logs.go:123] Gathering logs for kube-apiserver [f764ecfece48] ...
	I0314 11:17:23.208801   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f764ecfece48"
	I0314 11:17:23.227571   13130 logs.go:123] Gathering logs for coredns [a5edbd1e8e3a] ...
	I0314 11:17:23.227583   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5edbd1e8e3a"
	I0314 11:17:23.239333   13130 logs.go:123] Gathering logs for container status ...
	I0314 11:17:23.239344   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:17:23.251463   13130 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:17:23.251474   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:17:23.286608   13130 logs.go:123] Gathering logs for coredns [6055464981da] ...
	I0314 11:17:23.286617   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6055464981da"
	I0314 11:17:23.298104   13130 logs.go:123] Gathering logs for coredns [26e593605a81] ...
	I0314 11:17:23.298115   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26e593605a81"
	I0314 11:17:23.314801   13130 logs.go:123] Gathering logs for coredns [e4efe5340121] ...
	I0314 11:17:23.314810   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4efe5340121"
	I0314 11:17:23.327845   13130 logs.go:123] Gathering logs for kube-controller-manager [c539cb8e52c8] ...
	I0314 11:17:23.327856   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c539cb8e52c8"
	I0314 11:17:23.345298   13130 logs.go:123] Gathering logs for storage-provisioner [118988a93a39] ...
	I0314 11:17:23.345308   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 118988a93a39"
	I0314 11:17:23.357963   13130 logs.go:123] Gathering logs for kube-scheduler [12305d6bc0e3] ...
	I0314 11:17:23.357975   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12305d6bc0e3"
	I0314 11:17:23.384768   13130 logs.go:123] Gathering logs for kube-proxy [b218570cde77] ...
	I0314 11:17:23.384778   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b218570cde77"
	I0314 11:17:20.694394   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:17:20.694437   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:17:25.695322   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:17:25.695350   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0314 11:17:26.100938   13262 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0314 11:17:26.105250   13262 out.go:177] * Enabled addons: storage-provisioner
	I0314 11:17:25.898861   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:17:26.113164   13262 addons.go:505] duration metric: took 30.52289075s for enable addons: enabled=[storage-provisioner]
	I0314 11:17:30.901001   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:17:30.901167   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:17:30.917074   13130 logs.go:276] 1 containers: [f764ecfece48]
	I0314 11:17:30.917154   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:17:30.929630   13130 logs.go:276] 1 containers: [41bacb9f9ff3]
	I0314 11:17:30.929701   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:17:30.944906   13130 logs.go:276] 4 containers: [6055464981da 26e593605a81 e4efe5340121 a5edbd1e8e3a]
	I0314 11:17:30.944973   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:17:30.960830   13130 logs.go:276] 1 containers: [12305d6bc0e3]
	I0314 11:17:30.960901   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:17:30.971952   13130 logs.go:276] 1 containers: [b218570cde77]
	I0314 11:17:30.972020   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:17:30.982800   13130 logs.go:276] 1 containers: [c539cb8e52c8]
	I0314 11:17:30.982868   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:17:30.992345   13130 logs.go:276] 0 containers: []
	W0314 11:17:30.992355   13130 logs.go:278] No container was found matching "kindnet"
	I0314 11:17:30.992414   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:17:31.002959   13130 logs.go:276] 1 containers: [118988a93a39]
	I0314 11:17:31.002980   13130 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:17:31.002986   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:17:31.037420   13130 logs.go:123] Gathering logs for coredns [a5edbd1e8e3a] ...
	I0314 11:17:31.037432   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5edbd1e8e3a"
	I0314 11:17:31.049075   13130 logs.go:123] Gathering logs for storage-provisioner [118988a93a39] ...
	I0314 11:17:31.049087   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 118988a93a39"
	I0314 11:17:31.060177   13130 logs.go:123] Gathering logs for kubelet ...
	I0314 11:17:31.060189   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:17:31.093635   13130 logs.go:123] Gathering logs for etcd [41bacb9f9ff3] ...
	I0314 11:17:31.093643   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41bacb9f9ff3"
	I0314 11:17:31.111283   13130 logs.go:123] Gathering logs for coredns [6055464981da] ...
	I0314 11:17:31.111297   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6055464981da"
	I0314 11:17:31.123366   13130 logs.go:123] Gathering logs for coredns [e4efe5340121] ...
	I0314 11:17:31.123377   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4efe5340121"
	I0314 11:17:31.135371   13130 logs.go:123] Gathering logs for kube-proxy [b218570cde77] ...
	I0314 11:17:31.135383   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b218570cde77"
	I0314 11:17:31.147593   13130 logs.go:123] Gathering logs for container status ...
	I0314 11:17:31.147603   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:17:31.159711   13130 logs.go:123] Gathering logs for dmesg ...
	I0314 11:17:31.159723   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:17:31.164297   13130 logs.go:123] Gathering logs for kube-apiserver [f764ecfece48] ...
	I0314 11:17:31.164304   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f764ecfece48"
	I0314 11:17:31.178454   13130 logs.go:123] Gathering logs for coredns [26e593605a81] ...
	I0314 11:17:31.178464   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26e593605a81"
	I0314 11:17:31.189785   13130 logs.go:123] Gathering logs for kube-scheduler [12305d6bc0e3] ...
	I0314 11:17:31.189793   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12305d6bc0e3"
	I0314 11:17:31.208487   13130 logs.go:123] Gathering logs for kube-controller-manager [c539cb8e52c8] ...
	I0314 11:17:31.208498   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c539cb8e52c8"
	I0314 11:17:31.226876   13130 logs.go:123] Gathering logs for Docker ...
	I0314 11:17:31.226886   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:17:33.753588   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:17:30.696792   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:17:30.696867   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:17:38.755794   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:17:38.755917   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:17:38.767074   13130 logs.go:276] 1 containers: [f764ecfece48]
	I0314 11:17:38.767157   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:17:38.778526   13130 logs.go:276] 1 containers: [41bacb9f9ff3]
	I0314 11:17:38.778596   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:17:38.789466   13130 logs.go:276] 4 containers: [6055464981da 26e593605a81 e4efe5340121 a5edbd1e8e3a]
	I0314 11:17:38.789538   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:17:38.799716   13130 logs.go:276] 1 containers: [12305d6bc0e3]
	I0314 11:17:38.799790   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:17:38.810519   13130 logs.go:276] 1 containers: [b218570cde77]
	I0314 11:17:38.810592   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:17:38.820717   13130 logs.go:276] 1 containers: [c539cb8e52c8]
	I0314 11:17:38.820786   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:17:38.831357   13130 logs.go:276] 0 containers: []
	W0314 11:17:38.831369   13130 logs.go:278] No container was found matching "kindnet"
	I0314 11:17:38.831427   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:17:38.842389   13130 logs.go:276] 1 containers: [118988a93a39]
	I0314 11:17:38.842407   13130 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:17:38.842413   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:17:38.877046   13130 logs.go:123] Gathering logs for coredns [26e593605a81] ...
	I0314 11:17:38.877059   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26e593605a81"
	I0314 11:17:38.888786   13130 logs.go:123] Gathering logs for coredns [a5edbd1e8e3a] ...
	I0314 11:17:38.888795   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5edbd1e8e3a"
	I0314 11:17:38.900824   13130 logs.go:123] Gathering logs for Docker ...
	I0314 11:17:38.900835   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:17:38.925168   13130 logs.go:123] Gathering logs for storage-provisioner [118988a93a39] ...
	I0314 11:17:38.925177   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 118988a93a39"
	I0314 11:17:38.936825   13130 logs.go:123] Gathering logs for container status ...
	I0314 11:17:38.936837   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:17:38.948435   13130 logs.go:123] Gathering logs for kubelet ...
	I0314 11:17:38.948446   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:17:38.985543   13130 logs.go:123] Gathering logs for dmesg ...
	I0314 11:17:38.985552   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:17:38.989772   13130 logs.go:123] Gathering logs for kube-proxy [b218570cde77] ...
	I0314 11:17:38.989780   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b218570cde77"
	I0314 11:17:39.001959   13130 logs.go:123] Gathering logs for kube-controller-manager [c539cb8e52c8] ...
	I0314 11:17:39.001973   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c539cb8e52c8"
	I0314 11:17:39.023542   13130 logs.go:123] Gathering logs for kube-apiserver [f764ecfece48] ...
	I0314 11:17:39.023553   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f764ecfece48"
	I0314 11:17:39.038215   13130 logs.go:123] Gathering logs for etcd [41bacb9f9ff3] ...
	I0314 11:17:39.038229   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41bacb9f9ff3"
	I0314 11:17:39.052226   13130 logs.go:123] Gathering logs for coredns [e4efe5340121] ...
	I0314 11:17:39.052237   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4efe5340121"
	I0314 11:17:39.064483   13130 logs.go:123] Gathering logs for coredns [6055464981da] ...
	I0314 11:17:39.064497   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6055464981da"
	I0314 11:17:39.077443   13130 logs.go:123] Gathering logs for kube-scheduler [12305d6bc0e3] ...
	I0314 11:17:39.077455   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12305d6bc0e3"
	I0314 11:17:35.697560   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:17:35.697611   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:17:41.594214   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:17:40.698774   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:17:40.698828   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:17:46.596553   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:17:46.596899   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:17:46.631428   13130 logs.go:276] 1 containers: [f764ecfece48]
	I0314 11:17:46.631569   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:17:46.652809   13130 logs.go:276] 1 containers: [41bacb9f9ff3]
	I0314 11:17:46.652908   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:17:46.666842   13130 logs.go:276] 4 containers: [6055464981da 26e593605a81 e4efe5340121 a5edbd1e8e3a]
	I0314 11:17:46.666928   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:17:46.679253   13130 logs.go:276] 1 containers: [12305d6bc0e3]
	I0314 11:17:46.679327   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:17:46.689899   13130 logs.go:276] 1 containers: [b218570cde77]
	I0314 11:17:46.689967   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:17:46.700557   13130 logs.go:276] 1 containers: [c539cb8e52c8]
	I0314 11:17:46.700628   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:17:46.710720   13130 logs.go:276] 0 containers: []
	W0314 11:17:46.710731   13130 logs.go:278] No container was found matching "kindnet"
	I0314 11:17:46.710792   13130 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:17:46.726199   13130 logs.go:276] 1 containers: [118988a93a39]
	I0314 11:17:46.726217   13130 logs.go:123] Gathering logs for kube-proxy [b218570cde77] ...
	I0314 11:17:46.726223   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b218570cde77"
	I0314 11:17:46.738844   13130 logs.go:123] Gathering logs for kube-controller-manager [c539cb8e52c8] ...
	I0314 11:17:46.738855   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c539cb8e52c8"
	I0314 11:17:46.757934   13130 logs.go:123] Gathering logs for dmesg ...
	I0314 11:17:46.757945   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:17:46.763341   13130 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:17:46.763350   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:17:46.798654   13130 logs.go:123] Gathering logs for coredns [e4efe5340121] ...
	I0314 11:17:46.798664   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4efe5340121"
	I0314 11:17:46.811431   13130 logs.go:123] Gathering logs for kube-scheduler [12305d6bc0e3] ...
	I0314 11:17:46.811441   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 12305d6bc0e3"
	I0314 11:17:46.827588   13130 logs.go:123] Gathering logs for kube-apiserver [f764ecfece48] ...
	I0314 11:17:46.827600   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f764ecfece48"
	I0314 11:17:46.842657   13130 logs.go:123] Gathering logs for coredns [6055464981da] ...
	I0314 11:17:46.842669   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6055464981da"
	I0314 11:17:46.861508   13130 logs.go:123] Gathering logs for coredns [26e593605a81] ...
	I0314 11:17:46.861519   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 26e593605a81"
	I0314 11:17:46.872966   13130 logs.go:123] Gathering logs for kubelet ...
	I0314 11:17:46.872976   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:17:46.908161   13130 logs.go:123] Gathering logs for etcd [41bacb9f9ff3] ...
	I0314 11:17:46.908170   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 41bacb9f9ff3"
	I0314 11:17:46.922137   13130 logs.go:123] Gathering logs for Docker ...
	I0314 11:17:46.922148   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:17:46.946907   13130 logs.go:123] Gathering logs for container status ...
	I0314 11:17:46.946914   13130 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:17:46.958245   13130 logs.go:123] Gathering logs for coredns [a5edbd1e8e3a] ...
	I0314 11:17:46.958255   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a5edbd1e8e3a"
	I0314 11:17:46.969748   13130 logs.go:123] Gathering logs for storage-provisioner [118988a93a39] ...
	I0314 11:17:46.969761   13130 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 118988a93a39"
	I0314 11:17:45.700364   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:17:45.700392   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:17:49.482942   13130 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:17:54.485342   13130 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:17:54.490206   13130 out.go:177] 
	W0314 11:17:54.495169   13130 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0314 11:17:54.495183   13130 out.go:239] * 
	W0314 11:17:54.496402   13130 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0314 11:17:54.506869   13130 out.go:177] 
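
	This GUEST_START exit is the core failure of the run: the kube-apiserver container stays in Running state (see the container status table below), yet its /healthz endpoint never answers before the 6m0s node-start budget expires. The probe can be reproduced by hand from inside the guest; only the URL is taken from the log above, while the curl flags are illustrative assumptions:

	    # Probe the same healthz endpoint api_server.go polls; -k skips TLS
	    # verification since only reachability matters here, and --max-time
	    # bounds the wait the way the poller's client timeout does.
	    curl -k --max-time 5 https://10.0.2.15:8443/healthz
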
	I0314 11:17:50.702638   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:17:50.702672   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:17:55.704924   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:17:55.705083   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:17:55.715560   13262 logs.go:276] 1 containers: [d1542ac9663a]
	I0314 11:17:55.715636   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:17:55.726684   13262 logs.go:276] 1 containers: [bd019f4d619a]
	I0314 11:17:55.726754   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:17:55.737135   13262 logs.go:276] 2 containers: [60aada0d97ab 92093e266d4d]
	I0314 11:17:55.737208   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:17:55.747727   13262 logs.go:276] 1 containers: [96e093b518ce]
	I0314 11:17:55.747800   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:17:55.758610   13262 logs.go:276] 1 containers: [a1479f7acffc]
	I0314 11:17:55.758679   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:17:55.769857   13262 logs.go:276] 1 containers: [f5aed84f8939]
	I0314 11:17:55.769938   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:17:55.781600   13262 logs.go:276] 0 containers: []
	W0314 11:17:55.781614   13262 logs.go:278] No container was found matching "kindnet"
	I0314 11:17:55.781674   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:17:55.799417   13262 logs.go:276] 1 containers: [30a94033f9ad]
	I0314 11:17:55.799434   13262 logs.go:123] Gathering logs for kube-scheduler [96e093b518ce] ...
	I0314 11:17:55.799440   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96e093b518ce"
	I0314 11:17:55.815446   13262 logs.go:123] Gathering logs for storage-provisioner [30a94033f9ad] ...
	I0314 11:17:55.815458   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30a94033f9ad"
	I0314 11:17:55.827126   13262 logs.go:123] Gathering logs for Docker ...
	I0314 11:17:55.827139   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:17:55.852114   13262 logs.go:123] Gathering logs for kubelet ...
	I0314 11:17:55.852127   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:17:55.882708   13262 logs.go:123] Gathering logs for dmesg ...
	I0314 11:17:55.882716   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:17:55.887126   13262 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:17:55.887132   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:17:55.923433   13262 logs.go:123] Gathering logs for kube-apiserver [d1542ac9663a] ...
	I0314 11:17:55.923447   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1542ac9663a"
	I0314 11:17:55.938227   13262 logs.go:123] Gathering logs for coredns [92093e266d4d] ...
	I0314 11:17:55.938238   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92093e266d4d"
	I0314 11:17:55.949667   13262 logs.go:123] Gathering logs for container status ...
	I0314 11:17:55.949679   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:17:55.961251   13262 logs.go:123] Gathering logs for etcd [bd019f4d619a] ...
	I0314 11:17:55.961262   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd019f4d619a"
	I0314 11:17:55.975862   13262 logs.go:123] Gathering logs for coredns [60aada0d97ab] ...
	I0314 11:17:55.975872   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60aada0d97ab"
	I0314 11:17:55.987316   13262 logs.go:123] Gathering logs for kube-proxy [a1479f7acffc] ...
	I0314 11:17:55.987326   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1479f7acffc"
	I0314 11:17:55.999263   13262 logs.go:123] Gathering logs for kube-controller-manager [f5aed84f8939] ...
	I0314 11:17:55.999274   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5aed84f8939"
	I0314 11:17:58.520307   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:18:03.522958   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:18:03.523217   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:18:03.545131   13262 logs.go:276] 1 containers: [d1542ac9663a]
	I0314 11:18:03.545263   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:18:03.560409   13262 logs.go:276] 1 containers: [bd019f4d619a]
	I0314 11:18:03.560483   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:18:03.573049   13262 logs.go:276] 2 containers: [60aada0d97ab 92093e266d4d]
	I0314 11:18:03.573124   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:18:03.583554   13262 logs.go:276] 1 containers: [96e093b518ce]
	I0314 11:18:03.583618   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:18:03.594272   13262 logs.go:276] 1 containers: [a1479f7acffc]
	I0314 11:18:03.594341   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:18:03.604377   13262 logs.go:276] 1 containers: [f5aed84f8939]
	I0314 11:18:03.604447   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:18:03.614465   13262 logs.go:276] 0 containers: []
	W0314 11:18:03.614480   13262 logs.go:278] No container was found matching "kindnet"
	I0314 11:18:03.614532   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:18:03.624877   13262 logs.go:276] 1 containers: [30a94033f9ad]
	I0314 11:18:03.624893   13262 logs.go:123] Gathering logs for kube-scheduler [96e093b518ce] ...
	I0314 11:18:03.624898   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96e093b518ce"
	I0314 11:18:03.645586   13262 logs.go:123] Gathering logs for kube-proxy [a1479f7acffc] ...
	I0314 11:18:03.645598   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1479f7acffc"
	I0314 11:18:03.656730   13262 logs.go:123] Gathering logs for container status ...
	I0314 11:18:03.656742   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:18:03.668346   13262 logs.go:123] Gathering logs for kube-apiserver [d1542ac9663a] ...
	I0314 11:18:03.668360   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1542ac9663a"
	I0314 11:18:03.682899   13262 logs.go:123] Gathering logs for etcd [bd019f4d619a] ...
	I0314 11:18:03.682909   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd019f4d619a"
	I0314 11:18:03.696699   13262 logs.go:123] Gathering logs for coredns [92093e266d4d] ...
	I0314 11:18:03.696709   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92093e266d4d"
	I0314 11:18:03.708251   13262 logs.go:123] Gathering logs for coredns [60aada0d97ab] ...
	I0314 11:18:03.708262   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60aada0d97ab"
	I0314 11:18:03.719880   13262 logs.go:123] Gathering logs for kube-controller-manager [f5aed84f8939] ...
	I0314 11:18:03.719892   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5aed84f8939"
	I0314 11:18:03.738147   13262 logs.go:123] Gathering logs for storage-provisioner [30a94033f9ad] ...
	I0314 11:18:03.738158   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30a94033f9ad"
	I0314 11:18:03.749477   13262 logs.go:123] Gathering logs for Docker ...
	I0314 11:18:03.749489   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:18:03.773985   13262 logs.go:123] Gathering logs for kubelet ...
	I0314 11:18:03.773995   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:18:03.805292   13262 logs.go:123] Gathering logs for dmesg ...
	I0314 11:18:03.805301   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:18:03.809336   13262 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:18:03.809342   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	
	
	==> Docker <==
	-- Journal begins at Thu 2024-03-14 18:08:48 UTC, ends at Thu 2024-03-14 18:18:10 UTC. --
	Mar 14 18:17:55 running-upgrade-636000 dockerd[3212]: time="2024-03-14T18:17:55.636146602Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Mar 14 18:17:55 running-upgrade-636000 dockerd[3212]: time="2024-03-14T18:17:55.636198310Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 14 18:17:55 running-upgrade-636000 dockerd[3212]: time="2024-03-14T18:17:55.636209310Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 14 18:17:55 running-upgrade-636000 dockerd[3212]: time="2024-03-14T18:17:55.636263935Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/4c6fe9be308433ee9e98c88efe79386c958ddf7e841bb80b7e9366934b1312a7 pid=18628 runtime=io.containerd.runc.v2
	Mar 14 18:17:56 running-upgrade-636000 cri-dockerd[3054]: time="2024-03-14T18:17:56Z" level=error msg="ContainerStats resp: {0x40008d9280 linux}"
	Mar 14 18:17:57 running-upgrade-636000 cri-dockerd[3054]: time="2024-03-14T18:17:57Z" level=error msg="ContainerStats resp: {0x40003af780 linux}"
	Mar 14 18:17:57 running-upgrade-636000 cri-dockerd[3054]: time="2024-03-14T18:17:57Z" level=error msg="ContainerStats resp: {0x400018bdc0 linux}"
	Mar 14 18:17:57 running-upgrade-636000 cri-dockerd[3054]: time="2024-03-14T18:17:57Z" level=error msg="ContainerStats resp: {0x40005a5c00 linux}"
	Mar 14 18:17:57 running-upgrade-636000 cri-dockerd[3054]: time="2024-03-14T18:17:57Z" level=error msg="ContainerStats resp: {0x40000a4800 linux}"
	Mar 14 18:17:57 running-upgrade-636000 cri-dockerd[3054]: time="2024-03-14T18:17:57Z" level=error msg="ContainerStats resp: {0x40004dc780 linux}"
	Mar 14 18:17:57 running-upgrade-636000 cri-dockerd[3054]: time="2024-03-14T18:17:57Z" level=error msg="ContainerStats resp: {0x40006044c0 linux}"
	Mar 14 18:17:57 running-upgrade-636000 cri-dockerd[3054]: time="2024-03-14T18:17:57Z" level=error msg="ContainerStats resp: {0x4000604a00 linux}"
	Mar 14 18:18:00 running-upgrade-636000 cri-dockerd[3054]: time="2024-03-14T18:18:00Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Mar 14 18:18:05 running-upgrade-636000 cri-dockerd[3054]: time="2024-03-14T18:18:05Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Mar 14 18:18:07 running-upgrade-636000 cri-dockerd[3054]: time="2024-03-14T18:18:07Z" level=error msg="ContainerStats resp: {0x40008d9c80 linux}"
	Mar 14 18:18:07 running-upgrade-636000 cri-dockerd[3054]: time="2024-03-14T18:18:07Z" level=error msg="ContainerStats resp: {0x40005a4180 linux}"
	Mar 14 18:18:08 running-upgrade-636000 cri-dockerd[3054]: time="2024-03-14T18:18:08Z" level=error msg="ContainerStats resp: {0x40004dcb40 linux}"
	Mar 14 18:18:09 running-upgrade-636000 cri-dockerd[3054]: time="2024-03-14T18:18:09Z" level=error msg="ContainerStats resp: {0x40005a54c0 linux}"
	Mar 14 18:18:09 running-upgrade-636000 cri-dockerd[3054]: time="2024-03-14T18:18:09Z" level=error msg="ContainerStats resp: {0x400018b3c0 linux}"
	Mar 14 18:18:09 running-upgrade-636000 cri-dockerd[3054]: time="2024-03-14T18:18:09Z" level=error msg="ContainerStats resp: {0x40005a5d80 linux}"
	Mar 14 18:18:09 running-upgrade-636000 cri-dockerd[3054]: time="2024-03-14T18:18:09Z" level=error msg="ContainerStats resp: {0x4000994100 linux}"
	Mar 14 18:18:09 running-upgrade-636000 cri-dockerd[3054]: time="2024-03-14T18:18:09Z" level=error msg="ContainerStats resp: {0x4000994600 linux}"
	Mar 14 18:18:09 running-upgrade-636000 cri-dockerd[3054]: time="2024-03-14T18:18:09Z" level=error msg="ContainerStats resp: {0x4000994c40 linux}"
	Mar 14 18:18:09 running-upgrade-636000 cri-dockerd[3054]: time="2024-03-14T18:18:09Z" level=error msg="ContainerStats resp: {0x4000995140 linux}"
	Mar 14 18:18:10 running-upgrade-636000 cri-dockerd[3054]: time="2024-03-14T18:18:10Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	bb7c1e84e3f59       edaa71f2aee88       15 seconds ago      Running             coredns                   2                   5046e020ba211
	4c6fe9be30843       edaa71f2aee88       15 seconds ago      Running             coredns                   2                   9fbd7c9e00263
	6055464981daa       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   5046e020ba211
	26e593605a815       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   9fbd7c9e00263
	b218570cde776       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   150b593cfe731
	118988a93a392       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   aea791aaf2593
	12305d6bc0e3b       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   cd87388c5cef2
	41bacb9f9ff38       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   8a01f8ff6f0ab
	f764ecfece484       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   4e3a4c14c4717
	c539cb8e52c83       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   0899c8b66a504
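
	The two coredns rows at the top (ATTEMPT 2, 15 seconds old) next to their Exited attempt-1 counterparts show the restart churn happening while the apiserver stays unreachable. The same view can be pulled directly with the command the log collector falls back through above, assuming crictl is on the guest's PATH:

	    # List every pod container with state and restart attempt,
	    # matching the columns of the table above.
	    sudo crictl ps -a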
	
	
	==> coredns [26e593605a81] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 7811621377654809942.6358677213935367958. HINFO: read udp 10.244.0.2:44749->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7811621377654809942.6358677213935367958. HINFO: read udp 10.244.0.2:46262->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7811621377654809942.6358677213935367958. HINFO: read udp 10.244.0.2:33021->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7811621377654809942.6358677213935367958. HINFO: read udp 10.244.0.2:49263->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7811621377654809942.6358677213935367958. HINFO: read udp 10.244.0.2:34251->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7811621377654809942.6358677213935367958. HINFO: read udp 10.244.0.2:39714->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7811621377654809942.6358677213935367958. HINFO: read udp 10.244.0.2:36933->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7811621377654809942.6358677213935367958. HINFO: read udp 10.244.0.2:33177->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7811621377654809942.6358677213935367958. HINFO: read udp 10.244.0.2:45194->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7811621377654809942.6358677213935367958. HINFO: read udp 10.244.0.2:60015->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [4c6fe9be3084] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 6447706188753683266.2887458007392346981. HINFO: read udp 10.244.0.2:36215->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6447706188753683266.2887458007392346981. HINFO: read udp 10.244.0.2:41374->10.0.2.3:53: i/o timeout
	
	
	==> coredns [6055464981da] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 4977374601855491567.8079049183030557666. HINFO: read udp 10.244.0.3:52139->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4977374601855491567.8079049183030557666. HINFO: read udp 10.244.0.3:50859->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4977374601855491567.8079049183030557666. HINFO: read udp 10.244.0.3:59416->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4977374601855491567.8079049183030557666. HINFO: read udp 10.244.0.3:34678->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4977374601855491567.8079049183030557666. HINFO: read udp 10.244.0.3:36369->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4977374601855491567.8079049183030557666. HINFO: read udp 10.244.0.3:38738->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4977374601855491567.8079049183030557666. HINFO: read udp 10.244.0.3:43748->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4977374601855491567.8079049183030557666. HINFO: read udp 10.244.0.3:48858->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4977374601855491567.8079049183030557666. HINFO: read udp 10.244.0.3:53910->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4977374601855491567.8079049183030557666. HINFO: read udp 10.244.0.3:59334->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [bb7c1e84e3f5] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 8029626086603235338.8872320683679619536. HINFO: read udp 10.244.0.3:35674->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8029626086603235338.8872320683679619536. HINFO: read udp 10.244.0.3:59329->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8029626086603235338.8872320683679619536. HINFO: read udp 10.244.0.3:52078->10.0.2.3:53: i/o timeout
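
	Every coredns instance above fails the same way: HINFO self-test queries to the upstream resolver at 10.0.2.3:53 (the default DNS address of QEMU's user-mode networking) time out. A minimal check that the upstream is unreachable from the guest, assuming dig is present in the image:

	    # Query the upstream resolver coredns forwards to; a timeout here
	    # reproduces the i/o timeout errors in the coredns logs above.
	    dig +time=2 +tries=1 @10.0.2.3 example.com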
	
	
	==> describe nodes <==
	Name:               running-upgrade-636000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-636000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7
	                    minikube.k8s.io/name=running-upgrade-636000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_14T11_13_53_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Mar 2024 18:13:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-636000
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Mar 2024 18:18:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Mar 2024 18:13:53 +0000   Thu, 14 Mar 2024 18:13:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Mar 2024 18:13:53 +0000   Thu, 14 Mar 2024 18:13:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Mar 2024 18:13:53 +0000   Thu, 14 Mar 2024 18:13:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Mar 2024 18:13:53 +0000   Thu, 14 Mar 2024 18:13:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-636000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 802bc2c276a6446686f3cfc47981a2d9
	  System UUID:                802bc2c276a6446686f3cfc47981a2d9
	  Boot ID:                    124f8e8e-30f3-46af-aa04-e818b15024d2
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-d92jg                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m3s
	  kube-system                 coredns-6d4b75cb6d-wtnv4                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m3s
	  kube-system                 etcd-running-upgrade-636000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m18s
	  kube-system                 kube-apiserver-running-upgrade-636000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-controller-manager-running-upgrade-636000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-proxy-hxt6z                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 kube-scheduler-running-upgrade-636000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m2s   kube-proxy       
	  Normal  NodeReady                4m17s  kubelet          Node running-upgrade-636000 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  4m17s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m17s  kubelet          Node running-upgrade-636000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m17s  kubelet          Node running-upgrade-636000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m17s  kubelet          Node running-upgrade-636000 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m17s  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m3s   node-controller  Node running-upgrade-636000 event: Registered Node running-upgrade-636000 in Controller
	
	
	==> dmesg <==
	[Mar14 18:09] systemd-fstab-generator[875]: Ignoring "noauto" for root device
	[  +0.083362] systemd-fstab-generator[886]: Ignoring "noauto" for root device
	[  +0.069729] systemd-fstab-generator[897]: Ignoring "noauto" for root device
	[  +1.140871] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.091471] systemd-fstab-generator[1047]: Ignoring "noauto" for root device
	[  +0.079299] systemd-fstab-generator[1058]: Ignoring "noauto" for root device
	[  +2.535971] systemd-fstab-generator[1283]: Ignoring "noauto" for root device
	[ +11.154275] systemd-fstab-generator[1952]: Ignoring "noauto" for root device
	[  +3.185693] systemd-fstab-generator[2233]: Ignoring "noauto" for root device
	[  +0.129613] systemd-fstab-generator[2266]: Ignoring "noauto" for root device
	[  +0.092508] systemd-fstab-generator[2277]: Ignoring "noauto" for root device
	[  +0.089273] systemd-fstab-generator[2290]: Ignoring "noauto" for root device
	[ +13.041091] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.204060] systemd-fstab-generator[3009]: Ignoring "noauto" for root device
	[  +0.078481] systemd-fstab-generator[3022]: Ignoring "noauto" for root device
	[  +0.087792] systemd-fstab-generator[3033]: Ignoring "noauto" for root device
	[  +0.092098] systemd-fstab-generator[3047]: Ignoring "noauto" for root device
	[  +2.395441] systemd-fstab-generator[3199]: Ignoring "noauto" for root device
	[  +5.316572] systemd-fstab-generator[3593]: Ignoring "noauto" for root device
	[  +1.321394] systemd-fstab-generator[3859]: Ignoring "noauto" for root device
	[Mar14 18:10] kauditd_printk_skb: 68 callbacks suppressed
	[Mar14 18:13] kauditd_printk_skb: 23 callbacks suppressed
	[  +1.339828] systemd-fstab-generator[11912]: Ignoring "noauto" for root device
	[  +5.131929] systemd-fstab-generator[12496]: Ignoring "noauto" for root device
	[  +0.475400] systemd-fstab-generator[12627]: Ignoring "noauto" for root device
	
	
	==> etcd [41bacb9f9ff3] <==
	{"level":"info","ts":"2024-03-14T18:13:49.530Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-14T18:13:49.530Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-14T18:13:49.531Z","caller":"etcdserver/server.go:736","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"f074a195de705325","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-03-14T18:13:49.531Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-03-14T18:13:49.531Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-03-14T18:13:49.531Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-03-14T18:13:49.531Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-03-14T18:13:49.586Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-03-14T18:13:49.586Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-03-14T18:13:49.586Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-03-14T18:13:49.586Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-03-14T18:13:49.587Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-03-14T18:13:49.587Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-03-14T18:13:49.587Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-03-14T18:13:49.587Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-14T18:13:49.592Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-14T18:13:49.592Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-14T18:13:49.592Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-14T18:13:49.592Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-636000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-14T18:13:49.592Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-14T18:13:49.592Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-14T18:13:49.592Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-03-14T18:13:49.594Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-14T18:13:49.594Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-14T18:13:49.595Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 18:18:10 up 9 min,  0 users,  load average: 0.20, 0.25, 0.15
	Linux running-upgrade-636000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [f764ecfece48] <==
	I0314 18:13:51.054647       1 cache.go:39] Caches are synced for autoregister controller
	I0314 18:13:51.061177       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0314 18:13:51.063828       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0314 18:13:51.064146       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0314 18:13:51.064284       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0314 18:13:51.072950       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0314 18:13:51.081972       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0314 18:13:51.781688       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0314 18:13:51.950719       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0314 18:13:51.952401       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0314 18:13:51.952592       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0314 18:13:52.111629       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0314 18:13:52.124021       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0314 18:13:52.197874       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0314 18:13:52.199874       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0314 18:13:52.200233       1 controller.go:611] quota admission added evaluator for: endpoints
	I0314 18:13:52.201485       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0314 18:13:53.074353       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0314 18:13:53.376850       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0314 18:13:53.380137       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0314 18:13:53.385340       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0314 18:13:53.424976       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0314 18:14:07.343814       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0314 18:14:07.484058       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0314 18:14:08.469029       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [c539cb8e52c8] <==
	I0314 18:14:07.325445       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0314 18:14:07.325652       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0314 18:14:07.325831       1 event.go:294] "Event occurred" object="running-upgrade-636000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node running-upgrade-636000 event: Registered Node running-upgrade-636000 in Controller"
	I0314 18:14:07.325968       1 shared_informer.go:262] Caches are synced for ephemeral
	I0314 18:14:07.326399       1 shared_informer.go:262] Caches are synced for expand
	I0314 18:14:07.341570       1 shared_informer.go:262] Caches are synced for node
	I0314 18:14:07.341764       1 range_allocator.go:173] Starting range CIDR allocator
	I0314 18:14:07.341792       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I0314 18:14:07.341811       1 shared_informer.go:262] Caches are synced for cidrallocator
	I0314 18:14:07.352528       1 range_allocator.go:374] Set node running-upgrade-636000 PodCIDR to [10.244.0.0/24]
	I0314 18:14:07.352755       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-hxt6z"
	I0314 18:14:07.374249       1 shared_informer.go:262] Caches are synced for endpoint
	I0314 18:14:07.425369       1 shared_informer.go:262] Caches are synced for attach detach
	I0314 18:14:07.425483       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0314 18:14:07.480166       1 shared_informer.go:262] Caches are synced for deployment
	I0314 18:14:07.485215       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0314 18:14:07.492024       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-wtnv4"
	I0314 18:14:07.499882       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-d92jg"
	I0314 18:14:07.503110       1 shared_informer.go:262] Caches are synced for resource quota
	I0314 18:14:07.542587       1 shared_informer.go:262] Caches are synced for resource quota
	I0314 18:14:07.574482       1 shared_informer.go:262] Caches are synced for disruption
	I0314 18:14:07.574493       1 disruption.go:371] Sending events to api server.
	I0314 18:14:07.958363       1 shared_informer.go:262] Caches are synced for garbage collector
	I0314 18:14:08.014171       1 shared_informer.go:262] Caches are synced for garbage collector
	I0314 18:14:08.014181       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	
	==> kube-proxy [b218570cde77] <==
	I0314 18:14:08.453065       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0314 18:14:08.453093       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0314 18:14:08.453122       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0314 18:14:08.466784       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0314 18:14:08.466796       1 server_others.go:206] "Using iptables Proxier"
	I0314 18:14:08.466923       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0314 18:14:08.467067       1 server.go:661] "Version info" version="v1.24.1"
	I0314 18:14:08.467103       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 18:14:08.467451       1 config.go:317] "Starting service config controller"
	I0314 18:14:08.467505       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0314 18:14:08.467531       1 config.go:226] "Starting endpoint slice config controller"
	I0314 18:14:08.467550       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0314 18:14:08.467967       1 config.go:444] "Starting node config controller"
	I0314 18:14:08.467991       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0314 18:14:08.570504       1 shared_informer.go:262] Caches are synced for node config
	I0314 18:14:08.570524       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0314 18:14:08.570505       1 shared_informer.go:262] Caches are synced for service config
	
	
	==> kube-scheduler [12305d6bc0e3] <==
	W0314 18:13:50.994231       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0314 18:13:50.994264       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0314 18:13:50.994300       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0314 18:13:50.994317       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0314 18:13:50.994386       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0314 18:13:50.994411       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0314 18:13:50.994604       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0314 18:13:50.994615       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0314 18:13:51.860854       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0314 18:13:51.860947       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0314 18:13:51.889258       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0314 18:13:51.889268       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0314 18:13:51.890620       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0314 18:13:51.890630       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0314 18:13:51.892980       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0314 18:13:51.893015       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0314 18:13:51.925523       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0314 18:13:51.925545       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0314 18:13:51.928922       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0314 18:13:51.928933       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0314 18:13:52.009770       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0314 18:13:52.009808       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0314 18:13:52.073464       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0314 18:13:52.073556       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0314 18:13:52.388473       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Thu 2024-03-14 18:08:48 UTC, ends at Thu 2024-03-14 18:18:10 UTC. --
	Mar 14 18:13:54 running-upgrade-636000 kubelet[12502]: I0314 18:13:54.413977   12502 apiserver.go:52] "Watching apiserver"
	Mar 14 18:13:54 running-upgrade-636000 kubelet[12502]: I0314 18:13:54.834383   12502 reconciler.go:157] "Reconciler: start to sync state"
	Mar 14 18:13:55 running-upgrade-636000 kubelet[12502]: E0314 18:13:55.005601   12502 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-running-upgrade-636000\" already exists" pod="kube-system/kube-controller-manager-running-upgrade-636000"
	Mar 14 18:13:55 running-upgrade-636000 kubelet[12502]: E0314 18:13:55.205054   12502 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-running-upgrade-636000\" already exists" pod="kube-system/etcd-running-upgrade-636000"
	Mar 14 18:13:55 running-upgrade-636000 kubelet[12502]: E0314 18:13:55.405366   12502 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-scheduler-running-upgrade-636000\" already exists" pod="kube-system/kube-scheduler-running-upgrade-636000"
	Mar 14 18:13:55 running-upgrade-636000 kubelet[12502]: I0314 18:13:55.602093   12502 request.go:601] Waited for 1.121999726s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Mar 14 18:13:55 running-upgrade-636000 kubelet[12502]: E0314 18:13:55.604579   12502 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-running-upgrade-636000\" already exists" pod="kube-system/kube-apiserver-running-upgrade-636000"
	Mar 14 18:14:07 running-upgrade-636000 kubelet[12502]: I0314 18:14:07.332271   12502 topology_manager.go:200] "Topology Admit Handler"
	Mar 14 18:14:07 running-upgrade-636000 kubelet[12502]: I0314 18:14:07.334301   12502 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/33fc0b97-6903-4c86-a7c5-a5ee4983376c-tmp\") pod \"storage-provisioner\" (UID: \"33fc0b97-6903-4c86-a7c5-a5ee4983376c\") " pod="kube-system/storage-provisioner"
	Mar 14 18:14:07 running-upgrade-636000 kubelet[12502]: I0314 18:14:07.334387   12502 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wq2jk\" (UniqueName: \"kubernetes.io/projected/33fc0b97-6903-4c86-a7c5-a5ee4983376c-kube-api-access-wq2jk\") pod \"storage-provisioner\" (UID: \"33fc0b97-6903-4c86-a7c5-a5ee4983376c\") " pod="kube-system/storage-provisioner"
	Mar 14 18:14:07 running-upgrade-636000 kubelet[12502]: I0314 18:14:07.354989   12502 topology_manager.go:200] "Topology Admit Handler"
	Mar 14 18:14:07 running-upgrade-636000 kubelet[12502]: I0314 18:14:07.433781   12502 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Mar 14 18:14:07 running-upgrade-636000 kubelet[12502]: I0314 18:14:07.434138   12502 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Mar 14 18:14:07 running-upgrade-636000 kubelet[12502]: I0314 18:14:07.499419   12502 topology_manager.go:200] "Topology Admit Handler"
	Mar 14 18:14:07 running-upgrade-636000 kubelet[12502]: I0314 18:14:07.503941   12502 topology_manager.go:200] "Topology Admit Handler"
	Mar 14 18:14:07 running-upgrade-636000 kubelet[12502]: I0314 18:14:07.534860   12502 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7ccc\" (UniqueName: \"kubernetes.io/projected/e971f37a-6caa-45a1-9541-da450e9ebeb6-kube-api-access-g7ccc\") pod \"kube-proxy-hxt6z\" (UID: \"e971f37a-6caa-45a1-9541-da450e9ebeb6\") " pod="kube-system/kube-proxy-hxt6z"
	Mar 14 18:14:07 running-upgrade-636000 kubelet[12502]: I0314 18:14:07.534882   12502 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e971f37a-6caa-45a1-9541-da450e9ebeb6-xtables-lock\") pod \"kube-proxy-hxt6z\" (UID: \"e971f37a-6caa-45a1-9541-da450e9ebeb6\") " pod="kube-system/kube-proxy-hxt6z"
	Mar 14 18:14:07 running-upgrade-636000 kubelet[12502]: I0314 18:14:07.534901   12502 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e971f37a-6caa-45a1-9541-da450e9ebeb6-kube-proxy\") pod \"kube-proxy-hxt6z\" (UID: \"e971f37a-6caa-45a1-9541-da450e9ebeb6\") " pod="kube-system/kube-proxy-hxt6z"
	Mar 14 18:14:07 running-upgrade-636000 kubelet[12502]: I0314 18:14:07.534911   12502 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e971f37a-6caa-45a1-9541-da450e9ebeb6-lib-modules\") pod \"kube-proxy-hxt6z\" (UID: \"e971f37a-6caa-45a1-9541-da450e9ebeb6\") " pod="kube-system/kube-proxy-hxt6z"
	Mar 14 18:14:07 running-upgrade-636000 kubelet[12502]: I0314 18:14:07.635108   12502 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmhsp\" (UniqueName: \"kubernetes.io/projected/2c721e99-80f6-47b1-8a27-05c043e723be-kube-api-access-cmhsp\") pod \"coredns-6d4b75cb6d-wtnv4\" (UID: \"2c721e99-80f6-47b1-8a27-05c043e723be\") " pod="kube-system/coredns-6d4b75cb6d-wtnv4"
	Mar 14 18:14:07 running-upgrade-636000 kubelet[12502]: I0314 18:14:07.635249   12502 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wsqd\" (UniqueName: \"kubernetes.io/projected/831153fa-e696-458b-bcca-a1ad1315f10c-kube-api-access-8wsqd\") pod \"coredns-6d4b75cb6d-d92jg\" (UID: \"831153fa-e696-458b-bcca-a1ad1315f10c\") " pod="kube-system/coredns-6d4b75cb6d-d92jg"
	Mar 14 18:14:07 running-upgrade-636000 kubelet[12502]: I0314 18:14:07.635279   12502 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2c721e99-80f6-47b1-8a27-05c043e723be-config-volume\") pod \"coredns-6d4b75cb6d-wtnv4\" (UID: \"2c721e99-80f6-47b1-8a27-05c043e723be\") " pod="kube-system/coredns-6d4b75cb6d-wtnv4"
	Mar 14 18:14:07 running-upgrade-636000 kubelet[12502]: I0314 18:14:07.635289   12502 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/831153fa-e696-458b-bcca-a1ad1315f10c-config-volume\") pod \"coredns-6d4b75cb6d-d92jg\" (UID: \"831153fa-e696-458b-bcca-a1ad1315f10c\") " pod="kube-system/coredns-6d4b75cb6d-d92jg"
	Mar 14 18:17:55 running-upgrade-636000 kubelet[12502]: I0314 18:17:55.840305   12502 scope.go:110] "RemoveContainer" containerID="a5edbd1e8e3a3d3b886e0372f0c8befa81c77454603b1dc5cc9e0107c1c6f7bb"
	Mar 14 18:17:55 running-upgrade-636000 kubelet[12502]: I0314 18:17:55.853568   12502 scope.go:110] "RemoveContainer" containerID="e4efe53401219ed317593158362f4a3bc29caf58722fcefac1f2ef2d958bb63b"
	
	
	==> storage-provisioner [118988a93a39] <==
	I0314 18:14:07.825773       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0314 18:14:07.830663       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0314 18:14:07.830727       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0314 18:14:07.834692       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0314 18:14:07.834780       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-636000_48d6a3a6-6c54-4e10-a224-9869f68eeeef!
	I0314 18:14:07.835390       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7430bb86-03cb-4d6b-8899-65e0bd69e827", APIVersion:"v1", ResourceVersion:"355", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-636000_48d6a3a6-6c54-4e10-a224-9869f68eeeef became leader
	I0314 18:14:07.935585       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-636000_48d6a3a6-6c54-4e10-a224-9869f68eeeef!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-636000 -n running-upgrade-636000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-636000 -n running-upgrade-636000: exit status 2 (15.634102333s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-636000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-636000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-636000
--- FAIL: TestRunningBinaryUpgrade (633.58s)
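Note that the component logs captured above look healthy (etcd elects a leader, the apiserver syncs its caches, kube-proxy and both coredns pods start), so what actually failed here is the status probe: the {{.APIServer}} check returned "Stopped" and took 15.6s to exit. A minimal sketch for repeating that probe by hand, assuming the profile has not yet been deleted; the .Host and .Kubelet template fields are standard minikube status fields added here only for extra context:

	# re-run the same status probe the test helper used, with two extra fields
	out/minikube-darwin-arm64 status -p running-upgrade-636000 \
	  --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'
	# a healthy cluster prints: Running Running Running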

TestKubernetesUpgrade (17.59s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-023000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-023000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (10.037906916s)

-- stdout --
	* [kubernetes-upgrade-023000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18384
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-023000" primary control-plane node in "kubernetes-upgrade-023000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-023000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0314 11:10:53.466014   13198 out.go:291] Setting OutFile to fd 1 ...
	I0314 11:10:53.466149   13198 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:10:53.466152   13198 out.go:304] Setting ErrFile to fd 2...
	I0314 11:10:53.466154   13198 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:10:53.466277   13198 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
	I0314 11:10:53.467322   13198 out.go:298] Setting JSON to false
	I0314 11:10:53.483445   13198 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7825,"bootTime":1710432028,"procs":378,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0314 11:10:53.483505   13198 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0314 11:10:53.489794   13198 out.go:177] * [kubernetes-upgrade-023000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0314 11:10:53.497662   13198 out.go:177]   - MINIKUBE_LOCATION=18384
	I0314 11:10:53.497771   13198 notify.go:220] Checking for updates...
	I0314 11:10:53.505610   13198 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	I0314 11:10:53.508647   13198 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0314 11:10:53.511628   13198 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 11:10:53.514631   13198 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	I0314 11:10:53.517619   13198 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 11:10:53.521001   13198 config.go:182] Loaded profile config "multinode-382000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 11:10:53.521066   13198 config.go:182] Loaded profile config "running-upgrade-636000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0314 11:10:53.521111   13198 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 11:10:53.524565   13198 out.go:177] * Using the qemu2 driver based on user configuration
	I0314 11:10:53.531625   13198 start.go:297] selected driver: qemu2
	I0314 11:10:53.531631   13198 start.go:901] validating driver "qemu2" against <nil>
	I0314 11:10:53.531636   13198 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 11:10:53.533913   13198 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0314 11:10:53.537542   13198 out.go:177] * Automatically selected the socket_vmnet network
	I0314 11:10:53.540646   13198 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0314 11:10:53.540660   13198 cni.go:84] Creating CNI manager for ""
	I0314 11:10:53.540668   13198 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0314 11:10:53.540694   13198 start.go:340] cluster config:
	{Name:kubernetes-upgrade-023000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-023000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 11:10:53.544781   13198 iso.go:125] acquiring lock: {Name:mkb282eb7d87bd36e7a71d78648e2dadb7f0a89e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 11:10:53.552578   13198 out.go:177] * Starting "kubernetes-upgrade-023000" primary control-plane node in "kubernetes-upgrade-023000" cluster
	I0314 11:10:53.556666   13198 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0314 11:10:53.556679   13198 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0314 11:10:53.556688   13198 cache.go:56] Caching tarball of preloaded images
	I0314 11:10:53.556735   13198 preload.go:173] Found /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0314 11:10:53.556741   13198 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0314 11:10:53.556784   13198 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/kubernetes-upgrade-023000/config.json ...
	I0314 11:10:53.556793   13198 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/kubernetes-upgrade-023000/config.json: {Name:mkf3033086a1959d0c8149fce6bd11169b0ade83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 11:10:53.556990   13198 start.go:360] acquireMachinesLock for kubernetes-upgrade-023000: {Name:mk7f342f84a1fcf2acaf2ebdc885e8ee2d848fd8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 11:10:53.557018   13198 start.go:364] duration metric: took 22.708µs to acquireMachinesLock for "kubernetes-upgrade-023000"
	I0314 11:10:53.557030   13198 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-023000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-023000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0314 11:10:53.557057   13198 start.go:125] createHost starting for "" (driver="qemu2")
	I0314 11:10:53.565639   13198 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0314 11:10:53.588603   13198 start.go:159] libmachine.API.Create for "kubernetes-upgrade-023000" (driver="qemu2")
	I0314 11:10:53.588637   13198 client.go:168] LocalClient.Create starting
	I0314 11:10:53.588701   13198 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/ca.pem
	I0314 11:10:53.588731   13198 main.go:141] libmachine: Decoding PEM data...
	I0314 11:10:53.588738   13198 main.go:141] libmachine: Parsing certificate...
	I0314 11:10:53.588951   13198 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/cert.pem
	I0314 11:10:53.588978   13198 main.go:141] libmachine: Decoding PEM data...
	I0314 11:10:53.588985   13198 main.go:141] libmachine: Parsing certificate...
	I0314 11:10:53.589306   13198 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18384-10823/.minikube/cache/iso/arm64/minikube-v1.32.1-1710348681-18375-arm64.iso...
	I0314 11:10:53.828025   13198 main.go:141] libmachine: Creating SSH key...
	I0314 11:10:53.946104   13198 main.go:141] libmachine: Creating Disk image...
	I0314 11:10:53.946111   13198 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0314 11:10:53.946330   13198 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/kubernetes-upgrade-023000/disk.qcow2.raw /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/kubernetes-upgrade-023000/disk.qcow2
	I0314 11:10:53.964504   13198 main.go:141] libmachine: STDOUT: 
	I0314 11:10:53.964535   13198 main.go:141] libmachine: STDERR: 
	I0314 11:10:53.964606   13198 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/kubernetes-upgrade-023000/disk.qcow2 +20000M
	I0314 11:10:53.975485   13198 main.go:141] libmachine: STDOUT: Image resized.
	
	I0314 11:10:53.975501   13198 main.go:141] libmachine: STDERR: 
	I0314 11:10:53.975514   13198 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/kubernetes-upgrade-023000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/kubernetes-upgrade-023000/disk.qcow2
	I0314 11:10:53.975520   13198 main.go:141] libmachine: Starting QEMU VM...
	I0314 11:10:53.975549   13198 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/kubernetes-upgrade-023000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/kubernetes-upgrade-023000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/kubernetes-upgrade-023000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:99:15:d1:31:ed -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/kubernetes-upgrade-023000/disk.qcow2
	I0314 11:10:53.977471   13198 main.go:141] libmachine: STDOUT: 
	I0314 11:10:53.977486   13198 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0314 11:10:53.977508   13198 client.go:171] duration metric: took 388.873166ms to LocalClient.Create
	I0314 11:10:55.979743   13198 start.go:128] duration metric: took 2.422703417s to createHost
	I0314 11:10:55.979841   13198 start.go:83] releasing machines lock for "kubernetes-upgrade-023000", held for 2.422860416s
	W0314 11:10:55.979890   13198 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0314 11:10:55.990106   13198 out.go:177] * Deleting "kubernetes-upgrade-023000" in qemu2 ...
	W0314 11:10:56.025547   13198 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0314 11:10:56.025584   13198 start.go:728] Will try again in 5 seconds ...
	I0314 11:11:01.027646   13198 start.go:360] acquireMachinesLock for kubernetes-upgrade-023000: {Name:mk7f342f84a1fcf2acaf2ebdc885e8ee2d848fd8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 11:11:01.028089   13198 start.go:364] duration metric: took 365.167µs to acquireMachinesLock for "kubernetes-upgrade-023000"
	I0314 11:11:01.028212   13198 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-023000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-023000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0314 11:11:01.028357   13198 start.go:125] createHost starting for "" (driver="qemu2")
	I0314 11:11:01.033944   13198 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0314 11:11:01.079138   13198 start.go:159] libmachine.API.Create for "kubernetes-upgrade-023000" (driver="qemu2")
	I0314 11:11:01.079197   13198 client.go:168] LocalClient.Create starting
	I0314 11:11:01.079330   13198 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/ca.pem
	I0314 11:11:01.079396   13198 main.go:141] libmachine: Decoding PEM data...
	I0314 11:11:01.079409   13198 main.go:141] libmachine: Parsing certificate...
	I0314 11:11:01.079467   13198 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/cert.pem
	I0314 11:11:01.079508   13198 main.go:141] libmachine: Decoding PEM data...
	I0314 11:11:01.079519   13198 main.go:141] libmachine: Parsing certificate...
	I0314 11:11:01.080075   13198 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18384-10823/.minikube/cache/iso/arm64/minikube-v1.32.1-1710348681-18375-arm64.iso...
	I0314 11:11:01.233452   13198 main.go:141] libmachine: Creating SSH key...
	I0314 11:11:01.394376   13198 main.go:141] libmachine: Creating Disk image...
	I0314 11:11:01.394387   13198 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0314 11:11:01.394560   13198 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/kubernetes-upgrade-023000/disk.qcow2.raw /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/kubernetes-upgrade-023000/disk.qcow2
	I0314 11:11:01.406702   13198 main.go:141] libmachine: STDOUT: 
	I0314 11:11:01.406722   13198 main.go:141] libmachine: STDERR: 
	I0314 11:11:01.406784   13198 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/kubernetes-upgrade-023000/disk.qcow2 +20000M
	I0314 11:11:01.417496   13198 main.go:141] libmachine: STDOUT: Image resized.
	
	I0314 11:11:01.417522   13198 main.go:141] libmachine: STDERR: 
	I0314 11:11:01.417539   13198 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/kubernetes-upgrade-023000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/kubernetes-upgrade-023000/disk.qcow2
	I0314 11:11:01.417543   13198 main.go:141] libmachine: Starting QEMU VM...
	I0314 11:11:01.417585   13198 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/kubernetes-upgrade-023000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/kubernetes-upgrade-023000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/kubernetes-upgrade-023000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:30:04:ca:e8:54 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/kubernetes-upgrade-023000/disk.qcow2
	I0314 11:11:01.419326   13198 main.go:141] libmachine: STDOUT: 
	I0314 11:11:01.419346   13198 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0314 11:11:01.419362   13198 client.go:171] duration metric: took 340.166958ms to LocalClient.Create
	I0314 11:11:03.421542   13198 start.go:128] duration metric: took 2.393181875s to createHost
	I0314 11:11:03.421602   13198 start.go:83] releasing machines lock for "kubernetes-upgrade-023000", held for 2.393544s
	W0314 11:11:03.421808   13198 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-023000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-023000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0314 11:11:03.432854   13198 out.go:177] 
	W0314 11:11:03.439988   13198 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0314 11:11:03.440009   13198 out.go:239] * 
	* 
	W0314 11:11:03.441435   13198 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0314 11:11:03.453884   13198 out.go:177] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-023000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
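Both create attempts above fail at the same step: socket_vmnet_client cannot connect to /var/run/socket_vmnet, so QEMU never receives the network file descriptor it expects and the VM start is aborted. A host-side triage sketch, using only the paths that appear in the log; the launchctl check is an assumption about how the socket_vmnet daemon is managed on this agent:

	# does the unix socket exist, and is the daemon that serves it loaded?
	ls -l /var/run/socket_vmnet
	sudo launchctl list | grep -i socket_vmnet
	# probe the connection the same way minikube does, with a no-op command
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true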
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-023000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-023000: (2.0462595s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-023000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-023000 status --format={{.Host}}: exit status 7 (65.176416ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
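The "may be ok" note reflects that minikube encodes status results as bit flags in the exit code, so a non-zero exit right after a deliberate stop is expected. A sketch that decodes the value, assuming the flag layout used by minikube's status command (host=1, kubelet=2, apiserver=4):

	# decode a minikube status exit code into its component flags
	code=7
	(( code & 1 )) && echo "host stopped"
	(( code & 2 )) && echo "kubelet stopped"
	(( code & 4 )) && echo "apiserver stopped"
	# 7 sets all three flags, which is what a freshly stopped cluster reports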
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-023000 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-023000 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.184392292s)

-- stdout --
	* [kubernetes-upgrade-023000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18384
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-023000" primary control-plane node in "kubernetes-upgrade-023000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-023000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-023000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0314 11:11:05.617921   13226 out.go:291] Setting OutFile to fd 1 ...
	I0314 11:11:05.618056   13226 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:11:05.618060   13226 out.go:304] Setting ErrFile to fd 2...
	I0314 11:11:05.618062   13226 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:11:05.618179   13226 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
	I0314 11:11:05.619036   13226 out.go:298] Setting JSON to false
	I0314 11:11:05.635552   13226 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7837,"bootTime":1710432028,"procs":378,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0314 11:11:05.635630   13226 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0314 11:11:05.640439   13226 out.go:177] * [kubernetes-upgrade-023000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0314 11:11:05.647493   13226 out.go:177]   - MINIKUBE_LOCATION=18384
	I0314 11:11:05.651430   13226 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	I0314 11:11:05.647554   13226 notify.go:220] Checking for updates...
	I0314 11:11:05.657379   13226 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0314 11:11:05.660413   13226 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 11:11:05.661574   13226 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	I0314 11:11:05.664418   13226 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 11:11:05.667757   13226 config.go:182] Loaded profile config "kubernetes-upgrade-023000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0314 11:11:05.668015   13226 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 11:11:05.672312   13226 out.go:177] * Using the qemu2 driver based on existing profile
	I0314 11:11:05.679471   13226 start.go:297] selected driver: qemu2
	I0314 11:11:05.679477   13226 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-023000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-023000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 11:11:05.679541   13226 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 11:11:05.681962   13226 cni.go:84] Creating CNI manager for ""
	I0314 11:11:05.681981   13226 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0314 11:11:05.682008   13226 start.go:340] cluster config:
	{Name:kubernetes-upgrade-023000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:kubernetes-upgrade-023000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 11:11:05.686644   13226 iso.go:125] acquiring lock: {Name:mkb282eb7d87bd36e7a71d78648e2dadb7f0a89e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 11:11:05.694415   13226 out.go:177] * Starting "kubernetes-upgrade-023000" primary control-plane node in "kubernetes-upgrade-023000" cluster
	I0314 11:11:05.698475   13226 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0314 11:11:05.698494   13226 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4
	I0314 11:11:05.698506   13226 cache.go:56] Caching tarball of preloaded images
	I0314 11:11:05.698566   13226 preload.go:173] Found /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0314 11:11:05.698572   13226 cache.go:59] Finished verifying existence of preloaded tar for v1.29.0-rc.2 on docker
	I0314 11:11:05.698633   13226 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/kubernetes-upgrade-023000/config.json ...
	I0314 11:11:05.699083   13226 start.go:360] acquireMachinesLock for kubernetes-upgrade-023000: {Name:mk7f342f84a1fcf2acaf2ebdc885e8ee2d848fd8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 11:11:05.699107   13226 start.go:364] duration metric: took 17.958µs to acquireMachinesLock for "kubernetes-upgrade-023000"
	I0314 11:11:05.699115   13226 start.go:96] Skipping create...Using existing machine configuration
	I0314 11:11:05.699121   13226 fix.go:54] fixHost starting: 
	I0314 11:11:05.699225   13226 fix.go:112] recreateIfNeeded on kubernetes-upgrade-023000: state=Stopped err=<nil>
	W0314 11:11:05.699233   13226 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 11:11:05.702467   13226 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-023000" ...
	I0314 11:11:05.710414   13226 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/kubernetes-upgrade-023000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/kubernetes-upgrade-023000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/kubernetes-upgrade-023000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:30:04:ca:e8:54 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/kubernetes-upgrade-023000/disk.qcow2
	I0314 11:11:05.712488   13226 main.go:141] libmachine: STDOUT: 
	I0314 11:11:05.712509   13226 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0314 11:11:05.712538   13226 fix.go:56] duration metric: took 13.417042ms for fixHost
	I0314 11:11:05.712543   13226 start.go:83] releasing machines lock for "kubernetes-upgrade-023000", held for 13.432958ms
	W0314 11:11:05.712549   13226 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0314 11:11:05.712588   13226 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0314 11:11:05.712593   13226 start.go:728] Will try again in 5 seconds ...
	I0314 11:11:10.714678   13226 start.go:360] acquireMachinesLock for kubernetes-upgrade-023000: {Name:mk7f342f84a1fcf2acaf2ebdc885e8ee2d848fd8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 11:11:10.714989   13226 start.go:364] duration metric: took 234.167µs to acquireMachinesLock for "kubernetes-upgrade-023000"
	I0314 11:11:10.715115   13226 start.go:96] Skipping create...Using existing machine configuration
	I0314 11:11:10.715127   13226 fix.go:54] fixHost starting: 
	I0314 11:11:10.715567   13226 fix.go:112] recreateIfNeeded on kubernetes-upgrade-023000: state=Stopped err=<nil>
	W0314 11:11:10.715583   13226 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 11:11:10.723883   13226 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-023000" ...
	I0314 11:11:10.728073   13226 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/kubernetes-upgrade-023000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/kubernetes-upgrade-023000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/kubernetes-upgrade-023000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:30:04:ca:e8:54 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/kubernetes-upgrade-023000/disk.qcow2
	I0314 11:11:10.737969   13226 main.go:141] libmachine: STDOUT: 
	I0314 11:11:10.738074   13226 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0314 11:11:10.738254   13226 fix.go:56] duration metric: took 23.126083ms for fixHost
	I0314 11:11:10.738280   13226 start.go:83] releasing machines lock for "kubernetes-upgrade-023000", held for 23.274709ms
	W0314 11:11:10.738495   13226 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-023000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-023000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0314 11:11:10.746880   13226 out.go:177] 
	W0314 11:11:10.750056   13226 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0314 11:11:10.750094   13226 out.go:239] * 
	* 
	W0314 11:11:10.752148   13226 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0314 11:11:10.757949   13226 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-023000 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-023000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-023000 version --output=json: exit status 1 (117.143958ms)

                                                
                                                
** stderr ** 
	error: context "kubernetes-upgrade-023000" does not exist

                                                
                                                
** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-03-14 11:11:10.889399 -0700 PDT m=+953.405607335
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-023000 -n kubernetes-upgrade-023000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-023000 -n kubernetes-upgrade-023000: exit status 7 (31.310833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-023000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-023000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-023000
--- FAIL: TestKubernetesUpgrade (17.59s)
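
Both start attempts above fail the same way: the profile is configured with Network:socket_vmnet, but nothing was listening on /var/run/socket_vmnet, so each VM restart dies with "Connection refused" before Kubernetes is ever reached. A minimal pre-flight check for the agent, assuming socket_vmnet was installed via Homebrew at the /opt/socket_vmnet paths shown in the log (a sketch, not part of the test suite):

    # confirm the vmnet helper daemon is running before any qemu2 start
    pgrep -fl socket_vmnet || sudo brew services start socket_vmnet
    # the socket_vmnet_client invocation in the log connects to this socket
    ls -l /var/run/socket_vmnet

If the daemon is up but the profile is wedged, the "minikube delete -p kubernetes-upgrade-023000" suggested in the error output clears it, which is what the post-mortem cleanup above already ran.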

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.72s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.32.0 on darwin (arm64)
- MINIKUBE_LOCATION=18384
- KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current4173299768/001
* Using the hyperkit driver based on user configuration

                                                
                                                
X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

                                                
                                                
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.72s)

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.52s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.32.0 on darwin (arm64)
- MINIKUBE_LOCATION=18384
- KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current363410744/001
* Using the hyperkit driver based on user configuration

                                                
                                                
X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

                                                
                                                
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.52s)
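
Both hyperkit subtests fail identically and quickly: hyperkit is an Intel-only hypervisor, so the DRV_UNSUPPORTED_OS exit (status 56) is inherent to this darwin/arm64 agent rather than specific to either upgrade path. A hypothetical guard the harness could apply to skip these subtests on Apple Silicon (sketch only; the suite does not currently do this):

    # hyperkit cannot run on Apple Silicon; skip rather than fail
    if [ "$(uname -m)" = "arm64" ]; then
        echo "SKIP: hyperkit driver unsupported on darwin/arm64"
        exit 0
    fi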

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (579.83s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.1422340937 start -p stopped-upgrade-157000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.1422340937 start -p stopped-upgrade-157000 --memory=2200 --vm-driver=qemu2 : (46.688512334s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.1422340937 -p stopped-upgrade-157000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.1422340937 -p stopped-upgrade-157000 stop: (12.109086291s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-157000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-157000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m40.970582916s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-157000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18384
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-157000" primary control-plane node in "stopped-upgrade-157000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-157000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0314 11:12:15.411443   13262 out.go:291] Setting OutFile to fd 1 ...
	I0314 11:12:15.411583   13262 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:12:15.411587   13262 out.go:304] Setting ErrFile to fd 2...
	I0314 11:12:15.411589   13262 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:12:15.411720   13262 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
	I0314 11:12:15.412758   13262 out.go:298] Setting JSON to false
	I0314 11:12:15.430321   13262 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7907,"bootTime":1710432028,"procs":377,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0314 11:12:15.430386   13262 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0314 11:12:15.434821   13262 out.go:177] * [stopped-upgrade-157000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0314 11:12:15.442904   13262 out.go:177]   - MINIKUBE_LOCATION=18384
	I0314 11:12:15.442971   13262 notify.go:220] Checking for updates...
	I0314 11:12:15.450816   13262 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	I0314 11:12:15.452291   13262 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0314 11:12:15.455781   13262 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 11:12:15.458795   13262 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	I0314 11:12:15.461859   13262 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 11:12:15.465076   13262 config.go:182] Loaded profile config "stopped-upgrade-157000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0314 11:12:15.468745   13262 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0314 11:12:15.471795   13262 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 11:12:15.475695   13262 out.go:177] * Using the qemu2 driver based on existing profile
	I0314 11:12:15.482823   13262 start.go:297] selected driver: qemu2
	I0314 11:12:15.482829   13262 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-157000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52332 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-157000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0314 11:12:15.482896   13262 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 11:12:15.485618   13262 cni.go:84] Creating CNI manager for ""
	I0314 11:12:15.485631   13262 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0314 11:12:15.485656   13262 start.go:340] cluster config:
	{Name:stopped-upgrade-157000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52332 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-157000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0314 11:12:15.485705   13262 iso.go:125] acquiring lock: {Name:mkb282eb7d87bd36e7a71d78648e2dadb7f0a89e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 11:12:15.493836   13262 out.go:177] * Starting "stopped-upgrade-157000" primary control-plane node in "stopped-upgrade-157000" cluster
	I0314 11:12:15.497790   13262 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0314 11:12:15.497807   13262 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0314 11:12:15.497817   13262 cache.go:56] Caching tarball of preloaded images
	I0314 11:12:15.497871   13262 preload.go:173] Found /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0314 11:12:15.497876   13262 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0314 11:12:15.497931   13262 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/stopped-upgrade-157000/config.json ...
	I0314 11:12:15.498386   13262 start.go:360] acquireMachinesLock for stopped-upgrade-157000: {Name:mk7f342f84a1fcf2acaf2ebdc885e8ee2d848fd8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 11:12:15.498416   13262 start.go:364] duration metric: took 24.708µs to acquireMachinesLock for "stopped-upgrade-157000"
	I0314 11:12:15.498425   13262 start.go:96] Skipping create...Using existing machine configuration
	I0314 11:12:15.498429   13262 fix.go:54] fixHost starting: 
	I0314 11:12:15.498525   13262 fix.go:112] recreateIfNeeded on stopped-upgrade-157000: state=Stopped err=<nil>
	W0314 11:12:15.498535   13262 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 11:12:15.502838   13262 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-157000" ...
	I0314 11:12:15.510856   13262 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/8.2.1/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/stopped-upgrade-157000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/stopped-upgrade-157000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/stopped-upgrade-157000/qemu.pid -nic user,model=virtio,hostfwd=tcp::52297-:22,hostfwd=tcp::52298-:2376,hostname=stopped-upgrade-157000 -daemonize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/stopped-upgrade-157000/disk.qcow2
	I0314 11:12:15.558810   13262 main.go:141] libmachine: STDOUT: 
	I0314 11:12:15.558838   13262 main.go:141] libmachine: STDERR: 
	I0314 11:12:15.558844   13262 main.go:141] libmachine: Waiting for VM to start (ssh -p 52297 docker@127.0.0.1)...
	I0314 11:12:35.369452   13262 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/stopped-upgrade-157000/config.json ...
	I0314 11:12:35.369661   13262 machine.go:94] provisionDockerMachine start ...
	I0314 11:12:35.369707   13262 main.go:141] libmachine: Using SSH client type: native
	I0314 11:12:35.369834   13262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104ae9bf0] 0x104aec450 <nil>  [] 0s} localhost 52297 <nil> <nil>}
	I0314 11:12:35.369838   13262 main.go:141] libmachine: About to run SSH command:
	hostname
	I0314 11:12:35.439413   13262 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0314 11:12:35.439427   13262 buildroot.go:166] provisioning hostname "stopped-upgrade-157000"
	I0314 11:12:35.439494   13262 main.go:141] libmachine: Using SSH client type: native
	I0314 11:12:35.439614   13262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104ae9bf0] 0x104aec450 <nil>  [] 0s} localhost 52297 <nil> <nil>}
	I0314 11:12:35.439624   13262 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-157000 && echo "stopped-upgrade-157000" | sudo tee /etc/hostname
	I0314 11:12:35.513383   13262 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-157000
	
	I0314 11:12:35.513449   13262 main.go:141] libmachine: Using SSH client type: native
	I0314 11:12:35.513576   13262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104ae9bf0] 0x104aec450 <nil>  [] 0s} localhost 52297 <nil> <nil>}
	I0314 11:12:35.513586   13262 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-157000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-157000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-157000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 11:12:35.585623   13262 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 11:12:35.585637   13262 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18384-10823/.minikube CaCertPath:/Users/jenkins/minikube-integration/18384-10823/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18384-10823/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18384-10823/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18384-10823/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18384-10823/.minikube}
	I0314 11:12:35.585647   13262 buildroot.go:174] setting up certificates
	I0314 11:12:35.585652   13262 provision.go:84] configureAuth start
	I0314 11:12:35.585660   13262 provision.go:143] copyHostCerts
	I0314 11:12:35.585745   13262 exec_runner.go:144] found /Users/jenkins/minikube-integration/18384-10823/.minikube/ca.pem, removing ...
	I0314 11:12:35.585752   13262 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18384-10823/.minikube/ca.pem
	I0314 11:12:35.586538   13262 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18384-10823/.minikube/ca.pem (1082 bytes)
	I0314 11:12:35.586694   13262 exec_runner.go:144] found /Users/jenkins/minikube-integration/18384-10823/.minikube/cert.pem, removing ...
	I0314 11:12:35.586698   13262 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18384-10823/.minikube/cert.pem
	I0314 11:12:35.586747   13262 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18384-10823/.minikube/cert.pem (1123 bytes)
	I0314 11:12:35.586851   13262 exec_runner.go:144] found /Users/jenkins/minikube-integration/18384-10823/.minikube/key.pem, removing ...
	I0314 11:12:35.586854   13262 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18384-10823/.minikube/key.pem
	I0314 11:12:35.586896   13262 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18384-10823/.minikube/key.pem (1675 bytes)
	I0314 11:12:35.586974   13262 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18384-10823/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18384-10823/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-157000 san=[127.0.0.1 localhost minikube stopped-upgrade-157000]
	I0314 11:12:35.701532   13262 provision.go:177] copyRemoteCerts
	I0314 11:12:35.701568   13262 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 11:12:35.701577   13262 sshutil.go:53] new ssh client: &{IP:localhost Port:52297 SSHKeyPath:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/stopped-upgrade-157000/id_rsa Username:docker}
	I0314 11:12:35.738247   13262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0314 11:12:35.745299   13262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0314 11:12:35.752080   13262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0314 11:12:35.759024   13262 provision.go:87] duration metric: took 173.365917ms to configureAuth
	I0314 11:12:35.759034   13262 buildroot.go:189] setting minikube options for container-runtime
	I0314 11:12:35.759148   13262 config.go:182] Loaded profile config "stopped-upgrade-157000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0314 11:12:35.759189   13262 main.go:141] libmachine: Using SSH client type: native
	I0314 11:12:35.759289   13262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104ae9bf0] 0x104aec450 <nil>  [] 0s} localhost 52297 <nil> <nil>}
	I0314 11:12:35.759294   13262 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0314 11:12:35.827330   13262 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0314 11:12:35.827342   13262 buildroot.go:70] root file system type: tmpfs
	I0314 11:12:35.827392   13262 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0314 11:12:35.827436   13262 main.go:141] libmachine: Using SSH client type: native
	I0314 11:12:35.827535   13262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104ae9bf0] 0x104aec450 <nil>  [] 0s} localhost 52297 <nil> <nil>}
	I0314 11:12:35.827567   13262 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0314 11:12:35.897884   13262 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0314 11:12:35.897940   13262 main.go:141] libmachine: Using SSH client type: native
	I0314 11:12:35.898062   13262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104ae9bf0] 0x104aec450 <nil>  [] 0s} localhost 52297 <nil> <nil>}
	I0314 11:12:35.898070   13262 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0314 11:12:36.249445   13262 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0314 11:12:36.249466   13262 machine.go:97] duration metric: took 879.816ms to provisionDockerMachine
	I0314 11:12:36.249477   13262 start.go:293] postStartSetup for "stopped-upgrade-157000" (driver="qemu2")
	I0314 11:12:36.249483   13262 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 11:12:36.249559   13262 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 11:12:36.249570   13262 sshutil.go:53] new ssh client: &{IP:localhost Port:52297 SSHKeyPath:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/stopped-upgrade-157000/id_rsa Username:docker}
	I0314 11:12:36.286391   13262 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 11:12:36.287687   13262 info.go:137] Remote host: Buildroot 2021.02.12
	I0314 11:12:36.287695   13262 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18384-10823/.minikube/addons for local assets ...
	I0314 11:12:36.287763   13262 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18384-10823/.minikube/files for local assets ...
	I0314 11:12:36.287882   13262 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18384-10823/.minikube/files/etc/ssl/certs/112382.pem -> 112382.pem in /etc/ssl/certs
	I0314 11:12:36.288012   13262 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 11:12:36.290701   13262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18384-10823/.minikube/files/etc/ssl/certs/112382.pem --> /etc/ssl/certs/112382.pem (1708 bytes)
	I0314 11:12:36.297579   13262 start.go:296] duration metric: took 48.098333ms for postStartSetup
	I0314 11:12:36.297593   13262 fix.go:56] duration metric: took 20.799555583s for fixHost
	I0314 11:12:36.297633   13262 main.go:141] libmachine: Using SSH client type: native
	I0314 11:12:36.297733   13262 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104ae9bf0] 0x104aec450 <nil>  [] 0s} localhost 52297 <nil> <nil>}
	I0314 11:12:36.297738   13262 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0314 11:12:36.367100   13262 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710439956.736506504
	
	I0314 11:12:36.367109   13262 fix.go:216] guest clock: 1710439956.736506504
	I0314 11:12:36.367113   13262 fix.go:229] Guest: 2024-03-14 11:12:36.736506504 -0700 PDT Remote: 2024-03-14 11:12:36.297594 -0700 PDT m=+20.917985459 (delta=438.912504ms)
	I0314 11:12:36.367124   13262 fix.go:200] guest clock delta is within tolerance: 438.912504ms
	I0314 11:12:36.367127   13262 start.go:83] releasing machines lock for "stopped-upgrade-157000", held for 20.869099083s
	I0314 11:12:36.367196   13262 ssh_runner.go:195] Run: cat /version.json
	I0314 11:12:36.367198   13262 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 11:12:36.367205   13262 sshutil.go:53] new ssh client: &{IP:localhost Port:52297 SSHKeyPath:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/stopped-upgrade-157000/id_rsa Username:docker}
	I0314 11:12:36.367212   13262 sshutil.go:53] new ssh client: &{IP:localhost Port:52297 SSHKeyPath:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/stopped-upgrade-157000/id_rsa Username:docker}
	W0314 11:12:36.367811   13262 sshutil.go:64] dial failure (will retry): dial tcp [::1]:52297: connect: connection refused
	I0314 11:12:36.367835   13262 retry.go:31] will retry after 249.00188ms: dial tcp [::1]:52297: connect: connection refused
	W0314 11:12:36.401338   13262 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0314 11:12:36.401390   13262 ssh_runner.go:195] Run: systemctl --version
	I0314 11:12:36.403060   13262 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0314 11:12:36.404798   13262 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 11:12:36.404822   13262 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0314 11:12:36.407512   13262 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0314 11:12:36.412583   13262 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0314 11:12:36.412592   13262 start.go:494] detecting cgroup driver to use...
	I0314 11:12:36.412660   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 11:12:36.419547   13262 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0314 11:12:36.423237   13262 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0314 11:12:36.426483   13262 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0314 11:12:36.426508   13262 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0314 11:12:36.429924   13262 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0314 11:12:36.432853   13262 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0314 11:12:36.435575   13262 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0314 11:12:36.438834   13262 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 11:12:36.442299   13262 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0314 11:12:36.445506   13262 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 11:12:36.448079   13262 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 11:12:36.450997   13262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 11:12:36.514561   13262 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0314 11:12:36.523004   13262 start.go:494] detecting cgroup driver to use...
	I0314 11:12:36.523094   13262 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0314 11:12:36.531400   13262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 11:12:36.536015   13262 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 11:12:36.549944   13262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 11:12:36.555968   13262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0314 11:12:36.562789   13262 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0314 11:12:36.601990   13262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0314 11:12:36.606825   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 11:12:36.611959   13262 ssh_runner.go:195] Run: which cri-dockerd
	I0314 11:12:36.613211   13262 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0314 11:12:36.615601   13262 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0314 11:12:36.620675   13262 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0314 11:12:36.688261   13262 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0314 11:12:36.751298   13262 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0314 11:12:36.751369   13262 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0314 11:12:36.758638   13262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 11:12:36.827593   13262 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0314 11:12:37.949413   13262 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.121824583s)
	I0314 11:12:37.949482   13262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0314 11:12:37.954513   13262 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0314 11:12:37.960682   13262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0314 11:12:37.965292   13262 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0314 11:12:38.029504   13262 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0314 11:12:38.090803   13262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 11:12:38.154234   13262 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0314 11:12:38.160336   13262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0314 11:12:38.164887   13262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 11:12:38.231058   13262 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0314 11:12:38.276313   13262 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0314 11:12:38.276388   13262 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0314 11:12:38.278908   13262 start.go:562] Will wait 60s for crictl version
	I0314 11:12:38.278970   13262 ssh_runner.go:195] Run: which crictl
	I0314 11:12:38.280277   13262 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 11:12:38.295680   13262 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0314 11:12:38.295743   13262 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0314 11:12:38.319713   13262 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0314 11:12:38.339982   13262 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0314 11:12:38.340047   13262 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0314 11:12:38.341361   13262 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 11:12:38.345356   13262 kubeadm.go:877] updating cluster {Name:stopped-upgrade-157000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52332 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-157000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0314 11:12:38.345403   13262 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0314 11:12:38.345446   13262 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0314 11:12:38.356007   13262 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0314 11:12:38.356017   13262 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0314 11:12:38.356062   13262 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0314 11:12:38.359304   13262 ssh_runner.go:195] Run: which lz4
	I0314 11:12:38.360641   13262 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0314 11:12:38.361891   13262 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0314 11:12:38.361906   13262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0314 11:12:39.087859   13262 docker.go:649] duration metric: took 727.259542ms to copy over tarball
	I0314 11:12:39.087918   13262 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0314 11:12:40.412681   13262 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.324763333s)
	I0314 11:12:40.423763   13262 ssh_runner.go:146] rm: /preloaded.tar.lz4
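The stat / scp / tar sequence above is the preload fast path: the guest has no /preloaded.tar.lz4, so the ~360 MB cached tarball is copied in and unpacked directly under /var, pre-populating Docker's image store before the next daemon restart. Condensed into a sketch (the SSH endpoint is an assumption; minikube actually tunnels these commands through its own ssh_runner):

GUEST=root@10.0.2.15   # assumption: reachable SSH endpoint for the qemu2 guest
TARBALL=/Users/jenkins/minikube-integration/18384-10823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
ssh "$GUEST" 'stat /preloaded.tar.lz4' 2>/dev/null || {
  scp "$TARBALL" "$GUEST":/preloaded.tar.lz4
  ssh "$GUEST" 'sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4 && sudo rm /preloaded.tar.lz4'
}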
	I0314 11:12:40.442045   13262 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0314 11:12:40.445602   13262 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0314 11:12:40.451056   13262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 11:12:40.508887   13262 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0314 11:12:42.069086   13262 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.560210125s)
	I0314 11:12:42.069184   13262 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0314 11:12:42.084820   13262 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0314 11:12:42.084830   13262 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0314 11:12:42.084836   13262 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0314 11:12:42.093918   13262 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0314 11:12:42.093973   13262 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 11:12:42.094020   13262 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0314 11:12:42.094086   13262 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0314 11:12:42.094331   13262 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0314 11:12:42.094435   13262 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0314 11:12:42.094769   13262 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0314 11:12:42.094867   13262 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0314 11:12:42.103050   13262 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 11:12:42.104352   13262 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0314 11:12:42.104699   13262 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0314 11:12:42.104765   13262 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0314 11:12:42.104871   13262 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0314 11:12:42.104882   13262 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0314 11:12:42.104918   13262 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0314 11:12:42.104944   13262 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0314 11:12:44.082498   13262 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0314 11:12:44.114834   13262 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0314 11:12:44.114882   13262 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0314 11:12:44.114982   13262 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0314 11:12:44.132783   13262 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0314 11:12:44.148046   13262 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0314 11:12:44.162929   13262 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0314 11:12:44.162951   13262 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0314 11:12:44.163011   13262 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0314 11:12:44.174611   13262 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	W0314 11:12:44.193295   13262 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0314 11:12:44.193424   13262 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0314 11:12:44.203974   13262 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0314 11:12:44.203992   13262 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0314 11:12:44.204046   13262 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0314 11:12:44.213673   13262 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0314 11:12:44.213785   13262 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0314 11:12:44.216074   13262 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0314 11:12:44.216089   13262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0314 11:12:44.220142   13262 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0314 11:12:44.222432   13262 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0314 11:12:44.229559   13262 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0314 11:12:44.242350   13262 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0314 11:12:44.249328   13262 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0314 11:12:44.249357   13262 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0314 11:12:44.249414   13262 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0314 11:12:44.265678   13262 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0314 11:12:44.265693   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0314 11:12:44.277161   13262 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0314 11:12:44.277184   13262 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0314 11:12:44.277195   13262 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0314 11:12:44.277205   13262 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0314 11:12:44.277239   13262 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0314 11:12:44.277240   13262 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0314 11:12:44.277282   13262 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0314 11:12:44.277290   13262 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0314 11:12:44.277307   13262 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0314 11:12:44.284264   13262 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0314 11:12:44.335016   13262 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0314 11:12:44.335067   13262 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0314 11:12:44.335087   13262 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0314 11:12:44.335108   13262 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0314 11:12:44.335189   13262 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0314 11:12:44.335189   13262 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0314 11:12:44.336695   13262 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0314 11:12:44.336700   13262 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0314 11:12:44.336709   13262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0314 11:12:44.336708   13262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0314 11:12:44.362828   13262 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0314 11:12:44.362843   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0314 11:12:44.417810   13262 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0314 11:12:44.531804   13262 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0314 11:12:44.531819   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	W0314 11:12:44.551269   13262 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0314 11:12:44.551380   13262 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 11:12:44.674427   13262 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0314 11:12:44.674453   13262 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0314 11:12:44.674471   13262 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 11:12:44.674529   13262 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 11:12:44.688639   13262 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0314 11:12:44.688758   13262 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0314 11:12:44.690179   13262 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0314 11:12:44.690197   13262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0314 11:12:44.714937   13262 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0314 11:12:44.714953   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0314 11:12:44.948541   13262 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0314 11:12:44.948580   13262 cache_images.go:92] duration metric: took 2.863791375s to LoadCachedImages
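Every image in the LoadCachedImages list went through the same four-step repair above: docker image inspect to compare the runtime's image ID against the expected hash, docker rmi to drop the stale tag, scp of the cached arm64 tarball into /var/lib/minikube/images/, and a piped docker load. The pattern rolled into a loop (the loop and GUEST variable are illustrative; the paths are the ones this log reports):

GUEST=root@10.0.2.15   # assumption: reachable SSH endpoint for the guest
CACHE=/Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64
for img in registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0; do
  name=${img##*/}; name=${name/:/_}                  # e.g. pause:3.7 -> pause_3.7
  ssh "$GUEST" "docker rmi $img 2>/dev/null"         # drop the wrong-arch/stale tag
  scp "$CACHE/${img%:*}_${img#*:}" "$GUEST:/var/lib/minikube/images/$name"
  ssh "$GUEST" "sudo cat /var/lib/minikube/images/$name | docker load"
done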
	W0314 11:12:44.948616   13262 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	I0314 11:12:44.948622   13262 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0314 11:12:44.948674   13262 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-157000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-157000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
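In the rendered kubelet unit above, the empty ExecStart= line before the real one is the standard systemd drop-in idiom: it clears the ExecStart inherited from the base unit so the override can redefine it. Once the files are written (the two scp memory lines further down), the merged result can be inspected with:

systemctl cat kubelet   # shows kubelet.service plus the 10-kubeadm.conf drop-in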
	I0314 11:12:44.948733   13262 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0314 11:12:44.963025   13262 cni.go:84] Creating CNI manager for ""
	I0314 11:12:44.963039   13262 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0314 11:12:44.963046   13262 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0314 11:12:44.963054   13262 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-157000 NodeName:stopped-upgrade-157000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0314 11:12:44.963120   13262 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-157000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0314 11:12:44.963172   13262 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0314 11:12:44.966640   13262 binaries.go:44] Found k8s binaries, skipping transfer
	I0314 11:12:44.966668   13262 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0314 11:12:44.969899   13262 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0314 11:12:44.974698   13262 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0314 11:12:44.979552   13262 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0314 11:12:44.984970   13262 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0314 11:12:44.986180   13262 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 11:12:44.989620   13262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 11:12:45.051111   13262 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 11:12:45.062745   13262 certs.go:68] Setting up /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/stopped-upgrade-157000 for IP: 10.0.2.15
	I0314 11:12:45.062757   13262 certs.go:194] generating shared ca certs ...
	I0314 11:12:45.062766   13262 certs.go:226] acquiring lock for ca certs: {Name:mk6a5389e049f4ab73da9372eeaf63d358eca92f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 11:12:45.062927   13262 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18384-10823/.minikube/ca.key
	I0314 11:12:45.063190   13262 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18384-10823/.minikube/proxy-client-ca.key
	I0314 11:12:45.063198   13262 certs.go:256] generating profile certs ...
	I0314 11:12:45.063478   13262 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/stopped-upgrade-157000/client.key
	I0314 11:12:45.063520   13262 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/stopped-upgrade-157000/apiserver.key.e0ee09d6
	I0314 11:12:45.063534   13262 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/stopped-upgrade-157000/apiserver.crt.e0ee09d6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0314 11:12:45.204279   13262 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/stopped-upgrade-157000/apiserver.crt.e0ee09d6 ...
	I0314 11:12:45.204296   13262 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/stopped-upgrade-157000/apiserver.crt.e0ee09d6: {Name:mkf5b13511b68d86a378697f3d5619901b1032a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 11:12:45.204606   13262 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/stopped-upgrade-157000/apiserver.key.e0ee09d6 ...
	I0314 11:12:45.204611   13262 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/stopped-upgrade-157000/apiserver.key.e0ee09d6: {Name:mk1d1811403924069940736f68029fcffb7d246e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 11:12:45.204732   13262 certs.go:381] copying /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/stopped-upgrade-157000/apiserver.crt.e0ee09d6 -> /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/stopped-upgrade-157000/apiserver.crt
	I0314 11:12:45.204917   13262 certs.go:385] copying /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/stopped-upgrade-157000/apiserver.key.e0ee09d6 -> /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/stopped-upgrade-157000/apiserver.key
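The freshly generated apiserver cert must carry every address clients may dial; the SAN list requested above is [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]. If apiserver TLS failures show up later, the SANs can be read back with plain openssl:

openssl x509 -noout -text \
  -in /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/stopped-upgrade-157000/apiserver.crt \
  | grep -A1 'Subject Alternative Name'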
	I0314 11:12:45.205269   13262 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/stopped-upgrade-157000/proxy-client.key
	I0314 11:12:45.205467   13262 certs.go:484] found cert: /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/11238.pem (1338 bytes)
	W0314 11:12:45.205640   13262 certs.go:480] ignoring /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/11238_empty.pem, impossibly tiny 0 bytes
	I0314 11:12:45.205646   13262 certs.go:484] found cert: /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/ca-key.pem (1679 bytes)
	I0314 11:12:45.205674   13262 certs.go:484] found cert: /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/ca.pem (1082 bytes)
	I0314 11:12:45.205708   13262 certs.go:484] found cert: /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/cert.pem (1123 bytes)
	I0314 11:12:45.205733   13262 certs.go:484] found cert: /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/key.pem (1675 bytes)
	I0314 11:12:45.205788   13262 certs.go:484] found cert: /Users/jenkins/minikube-integration/18384-10823/.minikube/files/etc/ssl/certs/112382.pem (1708 bytes)
	I0314 11:12:45.206188   13262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18384-10823/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 11:12:45.213097   13262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18384-10823/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0314 11:12:45.219714   13262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18384-10823/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 11:12:45.226935   13262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18384-10823/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0314 11:12:45.234414   13262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/stopped-upgrade-157000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0314 11:12:45.240950   13262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/stopped-upgrade-157000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0314 11:12:45.247480   13262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/stopped-upgrade-157000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 11:12:45.254893   13262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/stopped-upgrade-157000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0314 11:12:45.261461   13262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18384-10823/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 11:12:45.267913   13262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/11238.pem --> /usr/share/ca-certificates/11238.pem (1338 bytes)
	I0314 11:12:45.274746   13262 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18384-10823/.minikube/files/etc/ssl/certs/112382.pem --> /usr/share/ca-certificates/112382.pem (1708 bytes)
	I0314 11:12:45.281925   13262 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0314 11:12:45.287347   13262 ssh_runner.go:195] Run: openssl version
	I0314 11:12:45.289563   13262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 11:12:45.292382   13262 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 11:12:45.293826   13262 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 14 18:09 /usr/share/ca-certificates/minikubeCA.pem
	I0314 11:12:45.293843   13262 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 11:12:45.295800   13262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0314 11:12:45.299111   13262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11238.pem && ln -fs /usr/share/ca-certificates/11238.pem /etc/ssl/certs/11238.pem"
	I0314 11:12:45.302550   13262 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11238.pem
	I0314 11:12:45.303986   13262 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 14 17:57 /usr/share/ca-certificates/11238.pem
	I0314 11:12:45.304006   13262 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11238.pem
	I0314 11:12:45.305776   13262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11238.pem /etc/ssl/certs/51391683.0"
	I0314 11:12:45.308707   13262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112382.pem && ln -fs /usr/share/ca-certificates/112382.pem /etc/ssl/certs/112382.pem"
	I0314 11:12:45.311602   13262 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112382.pem
	I0314 11:12:45.313149   13262 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 14 17:57 /usr/share/ca-certificates/112382.pem
	I0314 11:12:45.313172   13262 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112382.pem
	I0314 11:12:45.314921   13262 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112382.pem /etc/ssl/certs/3ec20f2e.0"
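The three test -L || ln -fs runs above implement OpenSSL's hashed-directory lookup: every CA under /etc/ssl/certs needs a symlink named <subject-hash>.0, where the hash is exactly what openssl x509 -hash prints (b5213941 for minikubeCA in this run). Spelled out for one cert:

h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # -> b5213941
sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/$h.0"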
	I0314 11:12:45.319018   13262 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 11:12:45.320589   13262 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0314 11:12:45.323208   13262 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0314 11:12:45.325652   13262 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0314 11:12:45.327916   13262 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0314 11:12:45.329879   13262 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0314 11:12:45.331684   13262 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
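openssl x509 -checkend N exits 0 only if the certificate is still valid N seconds from now, so the six runs above are a 24-hour expiry sweep across the control-plane certs. The same sweep with a visible verdict (a subset of the files checked above):

for c in apiserver-kubelet-client apiserver-etcd-client front-proxy-client; do
  openssl x509 -noout -checkend 86400 -in "/var/lib/minikube/certs/$c.crt" \
    && echo "$c: valid for at least 24h" || echo "$c: expires within 24h"
done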
	I0314 11:12:45.333563   13262 kubeadm.go:391] StartCluster: {Name:stopped-upgrade-157000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52332 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-157000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0314 11:12:45.333630   13262 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0314 11:12:45.344260   13262 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0314 11:12:45.347428   13262 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0314 11:12:45.347434   13262 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0314 11:12:45.347437   13262 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0314 11:12:45.347466   13262 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0314 11:12:45.350273   13262 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0314 11:12:45.351021   13262 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-157000" does not appear in /Users/jenkins/minikube-integration/18384-10823/kubeconfig
	I0314 11:12:45.351131   13262 kubeconfig.go:62] /Users/jenkins/minikube-integration/18384-10823/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-157000" cluster setting kubeconfig missing "stopped-upgrade-157000" context setting]
	I0314 11:12:45.351331   13262 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18384-10823/kubeconfig: {Name:mk22117ed76e85ca64a0d4fa77d593f7fc7d1176 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 11:12:45.351758   13262 kapi.go:59] client config for stopped-upgrade-157000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/stopped-upgrade-157000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/stopped-upgrade-157000/client.key", CAFile:"/Users/jenkins/minikube-integration/18384-10823/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105dd8630), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0314 11:12:45.352374   13262 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0314 11:12:45.355123   13262 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-157000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0314 11:12:45.355130   13262 kubeadm.go:1153] stopping kube-system containers ...
	I0314 11:12:45.355166   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0314 11:12:45.366347   13262 docker.go:483] Stopping containers: [82be34d06648 8e89db56a692 727ab0ab8602 7a8b7168210f c2b3b8dcaef6 425c1f709af1 aaf4ccdffb9c d8f4cbb7cd6a]
	I0314 11:12:45.366421   13262 ssh_runner.go:195] Run: docker stop 82be34d06648 8e89db56a692 727ab0ab8602 7a8b7168210f c2b3b8dcaef6 425c1f709af1 aaf4ccdffb9c d8f4cbb7cd6a
	I0314 11:12:45.377564   13262 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0314 11:12:45.382920   13262 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 11:12:45.386261   13262 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 11:12:45.386266   13262 kubeadm.go:156] found existing configuration files:
	
	I0314 11:12:45.386289   13262 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52332 /etc/kubernetes/admin.conf
	I0314 11:12:45.389281   13262 kubeadm.go:162] "https://control-plane.minikube.internal:52332" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:52332 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 11:12:45.389308   13262 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 11:12:45.391741   13262 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52332 /etc/kubernetes/kubelet.conf
	I0314 11:12:45.394655   13262 kubeadm.go:162] "https://control-plane.minikube.internal:52332" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:52332 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 11:12:45.394678   13262 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 11:12:45.397767   13262 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52332 /etc/kubernetes/controller-manager.conf
	I0314 11:12:45.400205   13262 kubeadm.go:162] "https://control-plane.minikube.internal:52332" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:52332 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 11:12:45.400223   13262 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 11:12:45.402900   13262 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52332 /etc/kubernetes/scheduler.conf
	I0314 11:12:45.405899   13262 kubeadm.go:162] "https://control-plane.minikube.internal:52332" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:52332 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 11:12:45.405922   13262 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
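The four grep/rm pairs above repeat one idiom: keep an existing /etc/kubernetes/*.conf only if it already points at the expected control-plane endpoint, otherwise delete it so the kubeadm phases below regenerate it. Rolled into a loop (endpoint taken from this log):

EP="https://control-plane.minikube.internal:52332"
for f in admin kubelet controller-manager scheduler; do
  sudo grep -q "$EP" "/etc/kubernetes/$f.conf" 2>/dev/null \
    || sudo rm -f "/etc/kubernetes/$f.conf"
done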
	I0314 11:12:45.408693   13262 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 11:12:45.411381   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 11:12:45.443204   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 11:12:45.871194   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0314 11:12:45.990973   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 11:12:46.021664   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0314 11:12:46.049700   13262 api_server.go:52] waiting for apiserver process to appear ...
	I0314 11:12:46.049785   13262 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 11:12:46.551923   13262 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 11:12:47.050159   13262 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 11:12:47.054260   13262 api_server.go:72] duration metric: took 1.004579708s to wait for apiserver process to appear ...
	I0314 11:12:47.054270   13262 api_server.go:88] waiting for apiserver healthz status ...
	I0314 11:12:47.054278   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:12:52.056429   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:12:52.056531   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:12:57.057406   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:12:57.057491   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:13:02.058300   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:13:02.058369   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:13:07.059744   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:13:07.059760   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:13:12.060851   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:13:12.060930   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:13:17.063088   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:13:17.063166   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:13:22.064713   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:13:22.064779   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:13:27.065663   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:13:27.065738   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:13:32.068142   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:13:32.068186   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:13:37.070406   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:13:37.070454   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:13:42.072684   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:13:42.072727   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:13:47.074845   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
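Each healthz poll above times out after roughly five seconds, the signature of an apiserver that never became reachable; from here the runner falls back to gathering container logs. The endpoint can also be probed by hand from inside the guest (the SSH endpoint is the same assumption as in the earlier sketches; -k because the cert chains to minikubeCA, not a system CA):

ssh root@10.0.2.15 'curl -k --max-time 5 https://localhost:8443/healthz'   # a healthy apiserver prints: ok
ssh root@10.0.2.15 'curl -k "https://localhost:8443/readyz?verbose"'       # per-check breakdown when it is up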
	I0314 11:13:47.074969   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:13:47.086647   13262 logs.go:276] 2 containers: [f9395aa9cac2 bc405a5fcc4c]
	I0314 11:13:47.086723   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:13:47.098698   13262 logs.go:276] 2 containers: [d36e3bec2911 8e89db56a692]
	I0314 11:13:47.098799   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:13:47.109192   13262 logs.go:276] 0 containers: []
	W0314 11:13:47.109206   13262 logs.go:278] No container was found matching "coredns"
	I0314 11:13:47.109274   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:13:47.120041   13262 logs.go:276] 2 containers: [d82fc4548c26 82be34d06648]
	I0314 11:13:47.120123   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:13:47.130670   13262 logs.go:276] 0 containers: []
	W0314 11:13:47.130683   13262 logs.go:278] No container was found matching "kube-proxy"
	I0314 11:13:47.130750   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:13:47.141624   13262 logs.go:276] 2 containers: [1f70f1399231 c1e3435f5898]
	I0314 11:13:47.141698   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:13:47.151896   13262 logs.go:276] 0 containers: []
	W0314 11:13:47.151907   13262 logs.go:278] No container was found matching "kindnet"
	I0314 11:13:47.151962   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:13:47.167342   13262 logs.go:276] 0 containers: []
	W0314 11:13:47.167355   13262 logs.go:278] No container was found matching "storage-provisioner"
	I0314 11:13:47.167360   13262 logs.go:123] Gathering logs for kubelet ...
	I0314 11:13:47.167366   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:13:47.197474   13262 logs.go:123] Gathering logs for dmesg ...
	I0314 11:13:47.197490   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:13:47.202295   13262 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:13:47.202308   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:13:47.330000   13262 logs.go:123] Gathering logs for kube-apiserver [f9395aa9cac2] ...
	I0314 11:13:47.330013   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9395aa9cac2"
	I0314 11:13:47.345706   13262 logs.go:123] Gathering logs for kube-apiserver [bc405a5fcc4c] ...
	I0314 11:13:47.345723   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc405a5fcc4c"
	I0314 11:13:47.360383   13262 logs.go:123] Gathering logs for kube-controller-manager [1f70f1399231] ...
	I0314 11:13:47.360396   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f70f1399231"
	I0314 11:13:47.380541   13262 logs.go:123] Gathering logs for kube-controller-manager [c1e3435f5898] ...
	I0314 11:13:47.380555   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1e3435f5898"
	I0314 11:13:47.398987   13262 logs.go:123] Gathering logs for Docker ...
	I0314 11:13:47.398999   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:13:47.424376   13262 logs.go:123] Gathering logs for container status ...
	I0314 11:13:47.424385   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:13:47.440412   13262 logs.go:123] Gathering logs for etcd [d36e3bec2911] ...
	I0314 11:13:47.440430   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36e3bec2911"
	I0314 11:13:47.455182   13262 logs.go:123] Gathering logs for etcd [8e89db56a692] ...
	I0314 11:13:47.455200   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e89db56a692"
	I0314 11:13:47.477334   13262 logs.go:123] Gathering logs for kube-scheduler [d82fc4548c26] ...
	I0314 11:13:47.477352   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d82fc4548c26"
	I0314 11:13:47.502963   13262 logs.go:123] Gathering logs for kube-scheduler [82be34d06648] ...
	I0314 11:13:47.502978   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82be34d06648"
	I0314 11:13:50.028068   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:13:55.030192   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:13:55.030324   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:13:55.041021   13262 logs.go:276] 2 containers: [f9395aa9cac2 bc405a5fcc4c]
	I0314 11:13:55.041098   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:13:55.052034   13262 logs.go:276] 2 containers: [d36e3bec2911 8e89db56a692]
	I0314 11:13:55.052103   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:13:55.062082   13262 logs.go:276] 1 containers: [0301160bab63]
	I0314 11:13:55.062157   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:13:55.072820   13262 logs.go:276] 2 containers: [d82fc4548c26 82be34d06648]
	I0314 11:13:55.072904   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:13:55.083146   13262 logs.go:276] 1 containers: [25bb30ac6daf]
	I0314 11:13:55.083216   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:13:55.093493   13262 logs.go:276] 2 containers: [1f70f1399231 c1e3435f5898]
	I0314 11:13:55.093566   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:13:55.103968   13262 logs.go:276] 0 containers: []
	W0314 11:13:55.103978   13262 logs.go:278] No container was found matching "kindnet"
	I0314 11:13:55.104035   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:13:55.114148   13262 logs.go:276] 0 containers: []
	W0314 11:13:55.114159   13262 logs.go:278] No container was found matching "storage-provisioner"
	I0314 11:13:55.114166   13262 logs.go:123] Gathering logs for kube-controller-manager [1f70f1399231] ...
	I0314 11:13:55.114171   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f70f1399231"
	I0314 11:13:55.132830   13262 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:13:55.132841   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:13:55.170242   13262 logs.go:123] Gathering logs for kube-apiserver [bc405a5fcc4c] ...
	I0314 11:13:55.170254   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc405a5fcc4c"
	I0314 11:13:55.182967   13262 logs.go:123] Gathering logs for kube-scheduler [d82fc4548c26] ...
	I0314 11:13:55.182981   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d82fc4548c26"
	I0314 11:13:55.211099   13262 logs.go:123] Gathering logs for kube-proxy [25bb30ac6daf] ...
	I0314 11:13:55.211110   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25bb30ac6daf"
	I0314 11:13:55.227132   13262 logs.go:123] Gathering logs for etcd [8e89db56a692] ...
	I0314 11:13:55.227149   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e89db56a692"
	I0314 11:13:55.242803   13262 logs.go:123] Gathering logs for kube-scheduler [82be34d06648] ...
	I0314 11:13:55.242819   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82be34d06648"
	I0314 11:13:55.258476   13262 logs.go:123] Gathering logs for container status ...
	I0314 11:13:55.258495   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:13:55.274025   13262 logs.go:123] Gathering logs for Docker ...
	I0314 11:13:55.274037   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:13:55.299897   13262 logs.go:123] Gathering logs for kubelet ...
	I0314 11:13:55.299912   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:13:55.328613   13262 logs.go:123] Gathering logs for kube-apiserver [f9395aa9cac2] ...
	I0314 11:13:55.328634   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9395aa9cac2"
	I0314 11:13:55.346316   13262 logs.go:123] Gathering logs for coredns [0301160bab63] ...
	I0314 11:13:55.346331   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0301160bab63"
	I0314 11:13:55.358340   13262 logs.go:123] Gathering logs for kube-controller-manager [c1e3435f5898] ...
	I0314 11:13:55.358352   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1e3435f5898"
	I0314 11:13:55.375283   13262 logs.go:123] Gathering logs for dmesg ...
	I0314 11:13:55.375294   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:13:55.379345   13262 logs.go:123] Gathering logs for etcd [d36e3bec2911] ...
	I0314 11:13:55.379352   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36e3bec2911"
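
Each pass above follows the same shape: api_server.go probes the apiserver health endpoint, the probe dies after what the "Client.Timeout exceeded" error indicates is a 5-second client timeout, and logs.go then re-enumerates the control-plane containers and re-dumps their logs before the next attempt. A minimal sketch of an equivalent manual probe from inside the guest, assuming the node IP 10.0.2.15 taken from the log (curl stands in for minikube's Go HTTP client; -k skips verification of the apiserver's self-signed certificate):

    # single probe with the same 5s budget seen in the log; prints "ok" when healthy
    curl -k --max-time 5 https://10.0.2.15:8443/healthz

    # retry until the endpoint answers, roughly matching the cadence above
    until curl -ksf --max-time 5 https://10.0.2.15:8443/healthz; do sleep 2; done
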
	I0314 11:13:57.895094   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:14:02.897429   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:14:02.897662   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:14:02.920166   13262 logs.go:276] 2 containers: [f9395aa9cac2 bc405a5fcc4c]
	I0314 11:14:02.920267   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:14:02.935915   13262 logs.go:276] 2 containers: [d36e3bec2911 8e89db56a692]
	I0314 11:14:02.935993   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:14:02.946897   13262 logs.go:276] 1 containers: [0301160bab63]
	I0314 11:14:02.946962   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:14:02.957767   13262 logs.go:276] 2 containers: [d82fc4548c26 82be34d06648]
	I0314 11:14:02.957849   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:14:02.968093   13262 logs.go:276] 1 containers: [25bb30ac6daf]
	I0314 11:14:02.968156   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:14:02.978429   13262 logs.go:276] 2 containers: [1f70f1399231 c1e3435f5898]
	I0314 11:14:02.978503   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:14:02.988316   13262 logs.go:276] 0 containers: []
	W0314 11:14:02.988328   13262 logs.go:278] No container was found matching "kindnet"
	I0314 11:14:02.988385   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:14:02.998227   13262 logs.go:276] 0 containers: []
	W0314 11:14:02.998241   13262 logs.go:278] No container was found matching "storage-provisioner"
	I0314 11:14:02.998249   13262 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:14:02.998255   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:14:03.033530   13262 logs.go:123] Gathering logs for kube-apiserver [bc405a5fcc4c] ...
	I0314 11:14:03.033541   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc405a5fcc4c"
	I0314 11:14:03.046459   13262 logs.go:123] Gathering logs for Docker ...
	I0314 11:14:03.046471   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:14:03.072408   13262 logs.go:123] Gathering logs for kube-scheduler [d82fc4548c26] ...
	I0314 11:14:03.072417   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d82fc4548c26"
	I0314 11:14:03.095517   13262 logs.go:123] Gathering logs for kube-proxy [25bb30ac6daf] ...
	I0314 11:14:03.095531   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25bb30ac6daf"
	I0314 11:14:03.107478   13262 logs.go:123] Gathering logs for container status ...
	I0314 11:14:03.107491   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:14:03.119505   13262 logs.go:123] Gathering logs for kube-controller-manager [1f70f1399231] ...
	I0314 11:14:03.119518   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f70f1399231"
	I0314 11:14:03.137617   13262 logs.go:123] Gathering logs for kubelet ...
	I0314 11:14:03.137631   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:14:03.166179   13262 logs.go:123] Gathering logs for dmesg ...
	I0314 11:14:03.166189   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:14:03.170551   13262 logs.go:123] Gathering logs for kube-apiserver [f9395aa9cac2] ...
	I0314 11:14:03.170558   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9395aa9cac2"
	I0314 11:14:03.184381   13262 logs.go:123] Gathering logs for etcd [d36e3bec2911] ...
	I0314 11:14:03.184392   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36e3bec2911"
	I0314 11:14:03.201980   13262 logs.go:123] Gathering logs for etcd [8e89db56a692] ...
	I0314 11:14:03.201991   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e89db56a692"
	I0314 11:14:03.216028   13262 logs.go:123] Gathering logs for coredns [0301160bab63] ...
	I0314 11:14:03.216043   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0301160bab63"
	I0314 11:14:03.229205   13262 logs.go:123] Gathering logs for kube-scheduler [82be34d06648] ...
	I0314 11:14:03.229216   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82be34d06648"
	I0314 11:14:03.244278   13262 logs.go:123] Gathering logs for kube-controller-manager [c1e3435f5898] ...
	I0314 11:14:03.244289   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1e3435f5898"
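
The per-component gathering in each pass is a two-step sequence that can be replayed by hand: docker ps resolves container IDs by the k8s_<component> name prefix used by cri-dockerd, then docker logs pulls the last 400 lines of each. A sketch over the component list filtered above (several components resolve to two IDs here, a restarted instance plus its exited predecessor, which is why pairs such as [f9395aa9cac2 bc405a5fcc4c] appear; kindnet and storage-provisioner resolve to none):

    # hypothetical replay of the gathering loop recorded in this log
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet storage-provisioner; do
      for id in $(docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}'); do
        echo "=== ${c} ${id} ==="
        docker logs --tail 400 "$id"
      done
    done
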
	I0314 11:14:05.764004   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:14:10.766309   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:14:10.766497   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:14:10.783095   13262 logs.go:276] 2 containers: [f9395aa9cac2 bc405a5fcc4c]
	I0314 11:14:10.783166   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:14:10.796582   13262 logs.go:276] 2 containers: [d36e3bec2911 8e89db56a692]
	I0314 11:14:10.796669   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:14:10.807197   13262 logs.go:276] 1 containers: [0301160bab63]
	I0314 11:14:10.807270   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:14:10.818048   13262 logs.go:276] 2 containers: [d82fc4548c26 82be34d06648]
	I0314 11:14:10.818126   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:14:10.828543   13262 logs.go:276] 1 containers: [25bb30ac6daf]
	I0314 11:14:10.828611   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:14:10.839096   13262 logs.go:276] 2 containers: [1f70f1399231 c1e3435f5898]
	I0314 11:14:10.839168   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:14:10.849344   13262 logs.go:276] 0 containers: []
	W0314 11:14:10.849354   13262 logs.go:278] No container was found matching "kindnet"
	I0314 11:14:10.849408   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:14:10.859621   13262 logs.go:276] 0 containers: []
	W0314 11:14:10.859632   13262 logs.go:278] No container was found matching "storage-provisioner"
	I0314 11:14:10.859643   13262 logs.go:123] Gathering logs for kube-scheduler [82be34d06648] ...
	I0314 11:14:10.859651   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82be34d06648"
	I0314 11:14:10.874523   13262 logs.go:123] Gathering logs for kube-controller-manager [1f70f1399231] ...
	I0314 11:14:10.874534   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f70f1399231"
	I0314 11:14:10.895051   13262 logs.go:123] Gathering logs for kube-controller-manager [c1e3435f5898] ...
	I0314 11:14:10.895062   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1e3435f5898"
	I0314 11:14:10.912960   13262 logs.go:123] Gathering logs for Docker ...
	I0314 11:14:10.912976   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:14:10.939058   13262 logs.go:123] Gathering logs for dmesg ...
	I0314 11:14:10.939083   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:14:10.942927   13262 logs.go:123] Gathering logs for etcd [d36e3bec2911] ...
	I0314 11:14:10.942933   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36e3bec2911"
	I0314 11:14:10.957043   13262 logs.go:123] Gathering logs for coredns [0301160bab63] ...
	I0314 11:14:10.957054   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0301160bab63"
	I0314 11:14:10.968993   13262 logs.go:123] Gathering logs for kube-scheduler [d82fc4548c26] ...
	I0314 11:14:10.969005   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d82fc4548c26"
	I0314 11:14:10.995748   13262 logs.go:123] Gathering logs for kube-apiserver [f9395aa9cac2] ...
	I0314 11:14:10.995759   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9395aa9cac2"
	I0314 11:14:11.009732   13262 logs.go:123] Gathering logs for container status ...
	I0314 11:14:11.009743   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:14:11.021702   13262 logs.go:123] Gathering logs for kubelet ...
	I0314 11:14:11.021714   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:14:11.051446   13262 logs.go:123] Gathering logs for etcd [8e89db56a692] ...
	I0314 11:14:11.051456   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e89db56a692"
	I0314 11:14:11.066140   13262 logs.go:123] Gathering logs for kube-proxy [25bb30ac6daf] ...
	I0314 11:14:11.066153   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25bb30ac6daf"
	I0314 11:14:11.077874   13262 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:14:11.077885   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:14:11.115610   13262 logs.go:123] Gathering logs for kube-apiserver [bc405a5fcc4c] ...
	I0314 11:14:11.115624   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc405a5fcc4c"
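
Beyond container logs, each pass also collects host-side sources; the exact commands appear in the Run: lines above and can be replayed directly:

    sudo journalctl -u kubelet -n 400                                          # kubelet unit, last 400 entries
    sudo journalctl -u docker -u cri-docker -n 400                             # Docker engine and cri-dockerd, interleaved
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400    # kernel ring buffer, warnings and worse, unpaged and uncolored
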
	I0314 11:14:13.630454   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:14:18.632685   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:14:18.633025   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:14:18.666079   13262 logs.go:276] 2 containers: [f9395aa9cac2 bc405a5fcc4c]
	I0314 11:14:18.666221   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:14:18.691685   13262 logs.go:276] 2 containers: [d36e3bec2911 8e89db56a692]
	I0314 11:14:18.691783   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:14:18.705045   13262 logs.go:276] 1 containers: [0301160bab63]
	I0314 11:14:18.705122   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:14:18.717008   13262 logs.go:276] 2 containers: [d82fc4548c26 82be34d06648]
	I0314 11:14:18.717084   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:14:18.728329   13262 logs.go:276] 1 containers: [25bb30ac6daf]
	I0314 11:14:18.728401   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:14:18.739442   13262 logs.go:276] 2 containers: [1f70f1399231 c1e3435f5898]
	I0314 11:14:18.739516   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:14:18.749539   13262 logs.go:276] 0 containers: []
	W0314 11:14:18.749551   13262 logs.go:278] No container was found matching "kindnet"
	I0314 11:14:18.749610   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:14:18.759769   13262 logs.go:276] 0 containers: []
	W0314 11:14:18.759781   13262 logs.go:278] No container was found matching "storage-provisioner"
	I0314 11:14:18.759791   13262 logs.go:123] Gathering logs for dmesg ...
	I0314 11:14:18.759797   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:14:18.764318   13262 logs.go:123] Gathering logs for etcd [8e89db56a692] ...
	I0314 11:14:18.764324   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e89db56a692"
	I0314 11:14:18.778369   13262 logs.go:123] Gathering logs for kube-proxy [25bb30ac6daf] ...
	I0314 11:14:18.778380   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25bb30ac6daf"
	I0314 11:14:18.789995   13262 logs.go:123] Gathering logs for kube-controller-manager [1f70f1399231] ...
	I0314 11:14:18.790009   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f70f1399231"
	I0314 11:14:18.807363   13262 logs.go:123] Gathering logs for kubelet ...
	I0314 11:14:18.807374   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:14:18.835706   13262 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:14:18.835713   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:14:18.871754   13262 logs.go:123] Gathering logs for kube-apiserver [f9395aa9cac2] ...
	I0314 11:14:18.871765   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9395aa9cac2"
	I0314 11:14:18.885377   13262 logs.go:123] Gathering logs for etcd [d36e3bec2911] ...
	I0314 11:14:18.885390   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36e3bec2911"
	I0314 11:14:18.899202   13262 logs.go:123] Gathering logs for kube-scheduler [d82fc4548c26] ...
	I0314 11:14:18.899214   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d82fc4548c26"
	I0314 11:14:18.922275   13262 logs.go:123] Gathering logs for container status ...
	I0314 11:14:18.922287   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:14:18.935029   13262 logs.go:123] Gathering logs for kube-apiserver [bc405a5fcc4c] ...
	I0314 11:14:18.935043   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc405a5fcc4c"
	I0314 11:14:18.948555   13262 logs.go:123] Gathering logs for coredns [0301160bab63] ...
	I0314 11:14:18.948568   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0301160bab63"
	I0314 11:14:18.960419   13262 logs.go:123] Gathering logs for kube-scheduler [82be34d06648] ...
	I0314 11:14:18.960430   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82be34d06648"
	I0314 11:14:18.981059   13262 logs.go:123] Gathering logs for kube-controller-manager [c1e3435f5898] ...
	I0314 11:14:18.981069   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1e3435f5898"
	I0314 11:14:18.999130   13262 logs.go:123] Gathering logs for Docker ...
	I0314 11:14:18.999140   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
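
The "container status" step is written as a fallback chain rather than assuming a particular runtime CLI: the backquoted which crictl || echo crictl substitutes the crictl path when the binary is installed, leaves the bare name (which then fails with command-not-found) when it is not, and the trailing || sudo docker ps -a falls back to Docker in that case. An equivalent standalone form with $() substitution:

    # same fallback chain as the log's backquoted version
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a
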
	I0314 11:14:21.527194   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:14:26.529489   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:14:26.529635   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:14:26.547309   13262 logs.go:276] 2 containers: [f9395aa9cac2 bc405a5fcc4c]
	I0314 11:14:26.547393   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:14:26.559361   13262 logs.go:276] 2 containers: [d36e3bec2911 8e89db56a692]
	I0314 11:14:26.559429   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:14:26.569503   13262 logs.go:276] 1 containers: [0301160bab63]
	I0314 11:14:26.569571   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:14:26.580045   13262 logs.go:276] 2 containers: [d82fc4548c26 82be34d06648]
	I0314 11:14:26.580114   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:14:26.590052   13262 logs.go:276] 1 containers: [25bb30ac6daf]
	I0314 11:14:26.590120   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:14:26.600385   13262 logs.go:276] 2 containers: [1f70f1399231 c1e3435f5898]
	I0314 11:14:26.600458   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:14:26.610155   13262 logs.go:276] 0 containers: []
	W0314 11:14:26.610169   13262 logs.go:278] No container was found matching "kindnet"
	I0314 11:14:26.610235   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:14:26.620477   13262 logs.go:276] 0 containers: []
	W0314 11:14:26.620488   13262 logs.go:278] No container was found matching "storage-provisioner"
	I0314 11:14:26.620497   13262 logs.go:123] Gathering logs for kube-apiserver [bc405a5fcc4c] ...
	I0314 11:14:26.620504   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc405a5fcc4c"
	I0314 11:14:26.633133   13262 logs.go:123] Gathering logs for etcd [d36e3bec2911] ...
	I0314 11:14:26.633147   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36e3bec2911"
	I0314 11:14:26.647477   13262 logs.go:123] Gathering logs for kube-controller-manager [c1e3435f5898] ...
	I0314 11:14:26.647490   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1e3435f5898"
	I0314 11:14:26.664382   13262 logs.go:123] Gathering logs for Docker ...
	I0314 11:14:26.664392   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:14:26.688422   13262 logs.go:123] Gathering logs for etcd [8e89db56a692] ...
	I0314 11:14:26.688432   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e89db56a692"
	I0314 11:14:26.702650   13262 logs.go:123] Gathering logs for kube-controller-manager [1f70f1399231] ...
	I0314 11:14:26.702661   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f70f1399231"
	I0314 11:14:26.719732   13262 logs.go:123] Gathering logs for kube-proxy [25bb30ac6daf] ...
	I0314 11:14:26.719743   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25bb30ac6daf"
	I0314 11:14:26.731689   13262 logs.go:123] Gathering logs for kubelet ...
	I0314 11:14:26.731701   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:14:26.761958   13262 logs.go:123] Gathering logs for dmesg ...
	I0314 11:14:26.761967   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:14:26.765991   13262 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:14:26.766002   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:14:26.801751   13262 logs.go:123] Gathering logs for kube-apiserver [f9395aa9cac2] ...
	I0314 11:14:26.801762   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9395aa9cac2"
	I0314 11:14:26.816378   13262 logs.go:123] Gathering logs for coredns [0301160bab63] ...
	I0314 11:14:26.816388   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0301160bab63"
	I0314 11:14:26.827975   13262 logs.go:123] Gathering logs for kube-scheduler [d82fc4548c26] ...
	I0314 11:14:26.827989   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d82fc4548c26"
	I0314 11:14:26.852026   13262 logs.go:123] Gathering logs for kube-scheduler [82be34d06648] ...
	I0314 11:14:26.852037   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82be34d06648"
	I0314 11:14:26.867038   13262 logs.go:123] Gathering logs for container status ...
	I0314 11:14:26.867049   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
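
The "describe nodes" step does not depend on a kubectl from the host PATH: it invokes the binary minikube installed inside the guest, pointed at the guest's kubeconfig (the v1.24.1 path segment presumably matches the Kubernetes version deployed in this run). Reproduced standalone:

    sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig
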
	I0314 11:14:29.381689   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:14:34.383932   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:14:34.384134   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:14:34.404777   13262 logs.go:276] 2 containers: [f9395aa9cac2 bc405a5fcc4c]
	I0314 11:14:34.404866   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:14:34.421099   13262 logs.go:276] 2 containers: [d36e3bec2911 8e89db56a692]
	I0314 11:14:34.421174   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:14:34.433428   13262 logs.go:276] 1 containers: [0301160bab63]
	I0314 11:14:34.433505   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:14:34.444079   13262 logs.go:276] 2 containers: [d82fc4548c26 82be34d06648]
	I0314 11:14:34.444144   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:14:34.454677   13262 logs.go:276] 1 containers: [25bb30ac6daf]
	I0314 11:14:34.454748   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:14:34.465159   13262 logs.go:276] 2 containers: [1f70f1399231 c1e3435f5898]
	I0314 11:14:34.465227   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:14:34.475454   13262 logs.go:276] 0 containers: []
	W0314 11:14:34.475466   13262 logs.go:278] No container was found matching "kindnet"
	I0314 11:14:34.475527   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:14:34.485614   13262 logs.go:276] 0 containers: []
	W0314 11:14:34.485629   13262 logs.go:278] No container was found matching "storage-provisioner"
	I0314 11:14:34.485636   13262 logs.go:123] Gathering logs for kube-proxy [25bb30ac6daf] ...
	I0314 11:14:34.485642   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25bb30ac6daf"
	I0314 11:14:34.497666   13262 logs.go:123] Gathering logs for kube-controller-manager [c1e3435f5898] ...
	I0314 11:14:34.497679   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1e3435f5898"
	I0314 11:14:34.514670   13262 logs.go:123] Gathering logs for container status ...
	I0314 11:14:34.514683   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:14:34.527142   13262 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:14:34.527155   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:14:34.562382   13262 logs.go:123] Gathering logs for kube-apiserver [f9395aa9cac2] ...
	I0314 11:14:34.562394   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9395aa9cac2"
	I0314 11:14:34.577236   13262 logs.go:123] Gathering logs for kube-scheduler [82be34d06648] ...
	I0314 11:14:34.577248   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82be34d06648"
	I0314 11:14:34.592714   13262 logs.go:123] Gathering logs for kube-controller-manager [1f70f1399231] ...
	I0314 11:14:34.592726   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f70f1399231"
	I0314 11:14:34.623234   13262 logs.go:123] Gathering logs for kubelet ...
	I0314 11:14:34.623250   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:14:34.655674   13262 logs.go:123] Gathering logs for dmesg ...
	I0314 11:14:34.655688   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:14:34.660239   13262 logs.go:123] Gathering logs for Docker ...
	I0314 11:14:34.660246   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:14:34.685515   13262 logs.go:123] Gathering logs for etcd [8e89db56a692] ...
	I0314 11:14:34.685526   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e89db56a692"
	I0314 11:14:34.702496   13262 logs.go:123] Gathering logs for coredns [0301160bab63] ...
	I0314 11:14:34.702506   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0301160bab63"
	I0314 11:14:34.713752   13262 logs.go:123] Gathering logs for kube-scheduler [d82fc4548c26] ...
	I0314 11:14:34.713764   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d82fc4548c26"
	I0314 11:14:34.736774   13262 logs.go:123] Gathering logs for kube-apiserver [bc405a5fcc4c] ...
	I0314 11:14:34.736786   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc405a5fcc4c"
	I0314 11:14:34.749680   13262 logs.go:123] Gathering logs for etcd [d36e3bec2911] ...
	I0314 11:14:34.749691   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36e3bec2911"
	I0314 11:14:37.264398   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:14:42.266630   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:14:42.266792   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:14:42.289017   13262 logs.go:276] 2 containers: [f9395aa9cac2 bc405a5fcc4c]
	I0314 11:14:42.289129   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:14:42.304606   13262 logs.go:276] 2 containers: [d36e3bec2911 8e89db56a692]
	I0314 11:14:42.304682   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:14:42.318147   13262 logs.go:276] 1 containers: [0301160bab63]
	I0314 11:14:42.318213   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:14:42.328901   13262 logs.go:276] 2 containers: [d82fc4548c26 82be34d06648]
	I0314 11:14:42.328972   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:14:42.342350   13262 logs.go:276] 1 containers: [25bb30ac6daf]
	I0314 11:14:42.342417   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:14:42.352535   13262 logs.go:276] 2 containers: [1f70f1399231 c1e3435f5898]
	I0314 11:14:42.352599   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:14:42.362528   13262 logs.go:276] 0 containers: []
	W0314 11:14:42.362540   13262 logs.go:278] No container was found matching "kindnet"
	I0314 11:14:42.362612   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:14:42.372185   13262 logs.go:276] 0 containers: []
	W0314 11:14:42.372198   13262 logs.go:278] No container was found matching "storage-provisioner"
	I0314 11:14:42.372206   13262 logs.go:123] Gathering logs for kube-apiserver [bc405a5fcc4c] ...
	I0314 11:14:42.372213   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc405a5fcc4c"
	I0314 11:14:42.385090   13262 logs.go:123] Gathering logs for kube-scheduler [d82fc4548c26] ...
	I0314 11:14:42.385101   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d82fc4548c26"
	I0314 11:14:42.407678   13262 logs.go:123] Gathering logs for container status ...
	I0314 11:14:42.407688   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:14:42.421293   13262 logs.go:123] Gathering logs for kubelet ...
	I0314 11:14:42.421310   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:14:42.450343   13262 logs.go:123] Gathering logs for etcd [d36e3bec2911] ...
	I0314 11:14:42.450351   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36e3bec2911"
	I0314 11:14:42.464119   13262 logs.go:123] Gathering logs for kube-scheduler [82be34d06648] ...
	I0314 11:14:42.464130   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82be34d06648"
	I0314 11:14:42.479222   13262 logs.go:123] Gathering logs for kube-controller-manager [1f70f1399231] ...
	I0314 11:14:42.479234   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f70f1399231"
	I0314 11:14:42.496346   13262 logs.go:123] Gathering logs for Docker ...
	I0314 11:14:42.496356   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:14:42.520635   13262 logs.go:123] Gathering logs for kube-apiserver [f9395aa9cac2] ...
	I0314 11:14:42.520644   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9395aa9cac2"
	I0314 11:14:42.534728   13262 logs.go:123] Gathering logs for coredns [0301160bab63] ...
	I0314 11:14:42.534739   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0301160bab63"
	I0314 11:14:42.545650   13262 logs.go:123] Gathering logs for kube-proxy [25bb30ac6daf] ...
	I0314 11:14:42.545660   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25bb30ac6daf"
	I0314 11:14:42.558079   13262 logs.go:123] Gathering logs for kube-controller-manager [c1e3435f5898] ...
	I0314 11:14:42.558095   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1e3435f5898"
	I0314 11:14:42.575472   13262 logs.go:123] Gathering logs for dmesg ...
	I0314 11:14:42.575485   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:14:42.579835   13262 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:14:42.579844   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:14:42.632748   13262 logs.go:123] Gathering logs for etcd [8e89db56a692] ...
	I0314 11:14:42.632760   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e89db56a692"
	I0314 11:14:45.152845   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:14:50.155134   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:14:50.155324   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:14:50.176526   13262 logs.go:276] 2 containers: [f9395aa9cac2 bc405a5fcc4c]
	I0314 11:14:50.176625   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:14:50.191306   13262 logs.go:276] 2 containers: [d36e3bec2911 8e89db56a692]
	I0314 11:14:50.191384   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:14:50.203120   13262 logs.go:276] 1 containers: [0301160bab63]
	I0314 11:14:50.203192   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:14:50.213969   13262 logs.go:276] 2 containers: [d82fc4548c26 82be34d06648]
	I0314 11:14:50.214038   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:14:50.224955   13262 logs.go:276] 1 containers: [25bb30ac6daf]
	I0314 11:14:50.225021   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:14:50.235362   13262 logs.go:276] 2 containers: [1f70f1399231 c1e3435f5898]
	I0314 11:14:50.235434   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:14:50.247352   13262 logs.go:276] 0 containers: []
	W0314 11:14:50.247366   13262 logs.go:278] No container was found matching "kindnet"
	I0314 11:14:50.247426   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:14:50.257312   13262 logs.go:276] 0 containers: []
	W0314 11:14:50.257326   13262 logs.go:278] No container was found matching "storage-provisioner"
	I0314 11:14:50.257335   13262 logs.go:123] Gathering logs for dmesg ...
	I0314 11:14:50.257342   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:14:50.261497   13262 logs.go:123] Gathering logs for kube-controller-manager [c1e3435f5898] ...
	I0314 11:14:50.261503   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1e3435f5898"
	I0314 11:14:50.279377   13262 logs.go:123] Gathering logs for kube-scheduler [82be34d06648] ...
	I0314 11:14:50.279393   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82be34d06648"
	I0314 11:14:50.294261   13262 logs.go:123] Gathering logs for container status ...
	I0314 11:14:50.294275   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:14:50.330390   13262 logs.go:123] Gathering logs for kubelet ...
	I0314 11:14:50.330402   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:14:50.361032   13262 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:14:50.361040   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:14:50.396027   13262 logs.go:123] Gathering logs for kube-apiserver [f9395aa9cac2] ...
	I0314 11:14:50.396043   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9395aa9cac2"
	I0314 11:14:50.409932   13262 logs.go:123] Gathering logs for kube-controller-manager [1f70f1399231] ...
	I0314 11:14:50.413109   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f70f1399231"
	I0314 11:14:50.431873   13262 logs.go:123] Gathering logs for etcd [8e89db56a692] ...
	I0314 11:14:50.431883   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e89db56a692"
	I0314 11:14:50.447000   13262 logs.go:123] Gathering logs for coredns [0301160bab63] ...
	I0314 11:14:50.447015   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0301160bab63"
	I0314 11:14:50.458764   13262 logs.go:123] Gathering logs for kube-scheduler [d82fc4548c26] ...
	I0314 11:14:50.458776   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d82fc4548c26"
	I0314 11:14:50.482510   13262 logs.go:123] Gathering logs for kube-proxy [25bb30ac6daf] ...
	I0314 11:14:50.482520   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25bb30ac6daf"
	I0314 11:14:50.494397   13262 logs.go:123] Gathering logs for Docker ...
	I0314 11:14:50.494415   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:14:50.518510   13262 logs.go:123] Gathering logs for kube-apiserver [bc405a5fcc4c] ...
	I0314 11:14:50.518520   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc405a5fcc4c"
	I0314 11:14:50.530951   13262 logs.go:123] Gathering logs for etcd [d36e3bec2911] ...
	I0314 11:14:50.530962   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36e3bec2911"
	I0314 11:14:53.046259   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:14:58.048576   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:14:58.048747   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:14:58.065701   13262 logs.go:276] 2 containers: [f9395aa9cac2 bc405a5fcc4c]
	I0314 11:14:58.065790   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:14:58.079237   13262 logs.go:276] 2 containers: [d36e3bec2911 8e89db56a692]
	I0314 11:14:58.079316   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:14:58.090618   13262 logs.go:276] 1 containers: [0301160bab63]
	I0314 11:14:58.090690   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:14:58.101412   13262 logs.go:276] 2 containers: [d82fc4548c26 82be34d06648]
	I0314 11:14:58.101496   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:14:58.111884   13262 logs.go:276] 1 containers: [25bb30ac6daf]
	I0314 11:14:58.111951   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:14:58.123095   13262 logs.go:276] 2 containers: [1f70f1399231 c1e3435f5898]
	I0314 11:14:58.123169   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:14:58.132947   13262 logs.go:276] 0 containers: []
	W0314 11:14:58.132959   13262 logs.go:278] No container was found matching "kindnet"
	I0314 11:14:58.133024   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:14:58.142998   13262 logs.go:276] 0 containers: []
	W0314 11:14:58.143011   13262 logs.go:278] No container was found matching "storage-provisioner"
	I0314 11:14:58.143020   13262 logs.go:123] Gathering logs for kube-controller-manager [c1e3435f5898] ...
	I0314 11:14:58.143025   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1e3435f5898"
	I0314 11:14:58.160602   13262 logs.go:123] Gathering logs for Docker ...
	I0314 11:14:58.160612   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:14:58.185451   13262 logs.go:123] Gathering logs for kube-apiserver [bc405a5fcc4c] ...
	I0314 11:14:58.185458   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc405a5fcc4c"
	I0314 11:14:58.197783   13262 logs.go:123] Gathering logs for kube-scheduler [d82fc4548c26] ...
	I0314 11:14:58.197792   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d82fc4548c26"
	I0314 11:14:58.221132   13262 logs.go:123] Gathering logs for kube-proxy [25bb30ac6daf] ...
	I0314 11:14:58.221142   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25bb30ac6daf"
	I0314 11:14:58.233510   13262 logs.go:123] Gathering logs for kube-controller-manager [1f70f1399231] ...
	I0314 11:14:58.233521   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f70f1399231"
	I0314 11:14:58.252109   13262 logs.go:123] Gathering logs for etcd [8e89db56a692] ...
	I0314 11:14:58.252120   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e89db56a692"
	I0314 11:14:58.266539   13262 logs.go:123] Gathering logs for coredns [0301160bab63] ...
	I0314 11:14:58.266549   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0301160bab63"
	I0314 11:14:58.280264   13262 logs.go:123] Gathering logs for etcd [d36e3bec2911] ...
	I0314 11:14:58.280276   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36e3bec2911"
	I0314 11:14:58.294282   13262 logs.go:123] Gathering logs for kube-scheduler [82be34d06648] ...
	I0314 11:14:58.294292   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82be34d06648"
	I0314 11:14:58.319556   13262 logs.go:123] Gathering logs for container status ...
	I0314 11:14:58.319567   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:14:58.331574   13262 logs.go:123] Gathering logs for kubelet ...
	I0314 11:14:58.331585   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:14:58.362341   13262 logs.go:123] Gathering logs for dmesg ...
	I0314 11:14:58.362350   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:14:58.366212   13262 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:14:58.366221   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:14:58.400919   13262 logs.go:123] Gathering logs for kube-apiserver [f9395aa9cac2] ...
	I0314 11:14:58.400930   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9395aa9cac2"
	I0314 11:15:00.917224   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:15:05.919351   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:15:05.919490   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:15:05.932217   13262 logs.go:276] 2 containers: [f9395aa9cac2 bc405a5fcc4c]
	I0314 11:15:05.932287   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:15:05.943612   13262 logs.go:276] 2 containers: [d36e3bec2911 8e89db56a692]
	I0314 11:15:05.943675   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:15:05.953757   13262 logs.go:276] 1 containers: [0301160bab63]
	I0314 11:15:05.953831   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:15:05.964057   13262 logs.go:276] 2 containers: [d82fc4548c26 82be34d06648]
	I0314 11:15:05.964121   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:15:05.974063   13262 logs.go:276] 1 containers: [25bb30ac6daf]
	I0314 11:15:05.974140   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:15:05.984725   13262 logs.go:276] 2 containers: [1f70f1399231 c1e3435f5898]
	I0314 11:15:05.984804   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:15:05.994634   13262 logs.go:276] 0 containers: []
	W0314 11:15:05.994652   13262 logs.go:278] No container was found matching "kindnet"
	I0314 11:15:05.994715   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:15:06.004775   13262 logs.go:276] 0 containers: []
	W0314 11:15:06.004787   13262 logs.go:278] No container was found matching "storage-provisioner"
	I0314 11:15:06.004795   13262 logs.go:123] Gathering logs for container status ...
	I0314 11:15:06.004801   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:15:06.016362   13262 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:15:06.016374   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:15:06.052427   13262 logs.go:123] Gathering logs for kube-apiserver [bc405a5fcc4c] ...
	I0314 11:15:06.052438   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc405a5fcc4c"
	I0314 11:15:06.065085   13262 logs.go:123] Gathering logs for kube-scheduler [d82fc4548c26] ...
	I0314 11:15:06.065097   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d82fc4548c26"
	I0314 11:15:06.088326   13262 logs.go:123] Gathering logs for Docker ...
	I0314 11:15:06.088338   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:15:06.113389   13262 logs.go:123] Gathering logs for kubelet ...
	I0314 11:15:06.113400   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:15:06.143143   13262 logs.go:123] Gathering logs for etcd [8e89db56a692] ...
	I0314 11:15:06.143153   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e89db56a692"
	I0314 11:15:06.164107   13262 logs.go:123] Gathering logs for etcd [d36e3bec2911] ...
	I0314 11:15:06.164118   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36e3bec2911"
	I0314 11:15:06.178101   13262 logs.go:123] Gathering logs for kube-scheduler [82be34d06648] ...
	I0314 11:15:06.178111   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82be34d06648"
	I0314 11:15:06.192533   13262 logs.go:123] Gathering logs for kube-proxy [25bb30ac6daf] ...
	I0314 11:15:06.192544   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25bb30ac6daf"
	I0314 11:15:06.203889   13262 logs.go:123] Gathering logs for kube-controller-manager [1f70f1399231] ...
	I0314 11:15:06.203900   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f70f1399231"
	I0314 11:15:06.224749   13262 logs.go:123] Gathering logs for kube-controller-manager [c1e3435f5898] ...
	I0314 11:15:06.224760   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1e3435f5898"
	I0314 11:15:06.242937   13262 logs.go:123] Gathering logs for dmesg ...
	I0314 11:15:06.242950   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:15:06.247603   13262 logs.go:123] Gathering logs for kube-apiserver [f9395aa9cac2] ...
	I0314 11:15:06.247609   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9395aa9cac2"
	I0314 11:15:06.265747   13262 logs.go:123] Gathering logs for coredns [0301160bab63] ...
	I0314 11:15:06.265757   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0301160bab63"
	I0314 11:15:08.778821   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:15:13.781103   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:15:13.781261   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:15:13.797929   13262 logs.go:276] 2 containers: [f9395aa9cac2 bc405a5fcc4c]
	I0314 11:15:13.798014   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:15:13.810609   13262 logs.go:276] 2 containers: [d36e3bec2911 8e89db56a692]
	I0314 11:15:13.810682   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:15:13.826517   13262 logs.go:276] 1 containers: [0301160bab63]
	I0314 11:15:13.826595   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:15:13.837086   13262 logs.go:276] 2 containers: [d82fc4548c26 82be34d06648]
	I0314 11:15:13.837151   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:15:13.847469   13262 logs.go:276] 1 containers: [25bb30ac6daf]
	I0314 11:15:13.847527   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:15:13.857724   13262 logs.go:276] 2 containers: [1f70f1399231 c1e3435f5898]
	I0314 11:15:13.857792   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:15:13.867214   13262 logs.go:276] 0 containers: []
	W0314 11:15:13.867226   13262 logs.go:278] No container was found matching "kindnet"
	I0314 11:15:13.867289   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:15:13.877677   13262 logs.go:276] 0 containers: []
	W0314 11:15:13.877691   13262 logs.go:278] No container was found matching "storage-provisioner"
	I0314 11:15:13.877699   13262 logs.go:123] Gathering logs for etcd [8e89db56a692] ...
	I0314 11:15:13.877708   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e89db56a692"
	I0314 11:15:13.891870   13262 logs.go:123] Gathering logs for coredns [0301160bab63] ...
	I0314 11:15:13.891881   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0301160bab63"
	I0314 11:15:13.903726   13262 logs.go:123] Gathering logs for etcd [d36e3bec2911] ...
	I0314 11:15:13.903739   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36e3bec2911"
	I0314 11:15:13.917490   13262 logs.go:123] Gathering logs for kube-scheduler [82be34d06648] ...
	I0314 11:15:13.917501   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82be34d06648"
	I0314 11:15:13.932715   13262 logs.go:123] Gathering logs for Docker ...
	I0314 11:15:13.932726   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:15:13.957178   13262 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:15:13.957186   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:15:13.992886   13262 logs.go:123] Gathering logs for kube-apiserver [f9395aa9cac2] ...
	I0314 11:15:13.992898   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9395aa9cac2"
	I0314 11:15:14.006729   13262 logs.go:123] Gathering logs for container status ...
	I0314 11:15:14.006740   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:15:14.019430   13262 logs.go:123] Gathering logs for kubelet ...
	I0314 11:15:14.019442   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:15:14.047998   13262 logs.go:123] Gathering logs for kube-apiserver [bc405a5fcc4c] ...
	I0314 11:15:14.048006   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc405a5fcc4c"
	I0314 11:15:14.060472   13262 logs.go:123] Gathering logs for kube-scheduler [d82fc4548c26] ...
	I0314 11:15:14.060484   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d82fc4548c26"
	I0314 11:15:14.084090   13262 logs.go:123] Gathering logs for kube-proxy [25bb30ac6daf] ...
	I0314 11:15:14.084103   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25bb30ac6daf"
	I0314 11:15:14.096749   13262 logs.go:123] Gathering logs for kube-controller-manager [1f70f1399231] ...
	I0314 11:15:14.096764   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f70f1399231"
	I0314 11:15:14.120579   13262 logs.go:123] Gathering logs for kube-controller-manager [c1e3435f5898] ...
	I0314 11:15:14.120591   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1e3435f5898"
	I0314 11:15:14.138210   13262 logs.go:123] Gathering logs for dmesg ...
	I0314 11:15:14.138224   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:15:16.643048   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:15:21.645330   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:15:21.645520   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:15:21.659644   13262 logs.go:276] 2 containers: [f9395aa9cac2 bc405a5fcc4c]
	I0314 11:15:21.659717   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:15:21.672335   13262 logs.go:276] 2 containers: [d36e3bec2911 8e89db56a692]
	I0314 11:15:21.672418   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:15:21.683299   13262 logs.go:276] 1 containers: [0301160bab63]
	I0314 11:15:21.683371   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:15:21.694170   13262 logs.go:276] 2 containers: [d82fc4548c26 82be34d06648]
	I0314 11:15:21.694243   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:15:21.708210   13262 logs.go:276] 1 containers: [25bb30ac6daf]
	I0314 11:15:21.708284   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:15:21.719111   13262 logs.go:276] 2 containers: [1f70f1399231 c1e3435f5898]
	I0314 11:15:21.719181   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:15:21.729202   13262 logs.go:276] 0 containers: []
	W0314 11:15:21.729215   13262 logs.go:278] No container was found matching "kindnet"
	I0314 11:15:21.729279   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:15:21.741010   13262 logs.go:276] 0 containers: []
	W0314 11:15:21.741024   13262 logs.go:278] No container was found matching "storage-provisioner"
	I0314 11:15:21.741033   13262 logs.go:123] Gathering logs for etcd [d36e3bec2911] ...
	I0314 11:15:21.741039   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36e3bec2911"
	I0314 11:15:21.755888   13262 logs.go:123] Gathering logs for kube-apiserver [f9395aa9cac2] ...
	I0314 11:15:21.755901   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9395aa9cac2"
	I0314 11:15:21.769561   13262 logs.go:123] Gathering logs for kube-scheduler [d82fc4548c26] ...
	I0314 11:15:21.769573   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d82fc4548c26"
	I0314 11:15:21.793726   13262 logs.go:123] Gathering logs for kube-scheduler [82be34d06648] ...
	I0314 11:15:21.793738   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82be34d06648"
	I0314 11:15:21.808530   13262 logs.go:123] Gathering logs for Docker ...
	I0314 11:15:21.808540   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:15:21.833331   13262 logs.go:123] Gathering logs for coredns [0301160bab63] ...
	I0314 11:15:21.833339   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0301160bab63"
	I0314 11:15:21.844480   13262 logs.go:123] Gathering logs for dmesg ...
	I0314 11:15:21.844492   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:15:21.848442   13262 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:15:21.848449   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:15:21.883517   13262 logs.go:123] Gathering logs for kube-controller-manager [1f70f1399231] ...
	I0314 11:15:21.883528   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f70f1399231"
	I0314 11:15:21.901896   13262 logs.go:123] Gathering logs for container status ...
	I0314 11:15:21.901905   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:15:21.917898   13262 logs.go:123] Gathering logs for kubelet ...
	I0314 11:15:21.917908   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:15:21.946687   13262 logs.go:123] Gathering logs for etcd [8e89db56a692] ...
	I0314 11:15:21.946698   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e89db56a692"
	I0314 11:15:21.961301   13262 logs.go:123] Gathering logs for kube-proxy [25bb30ac6daf] ...
	I0314 11:15:21.961311   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25bb30ac6daf"
	I0314 11:15:21.973895   13262 logs.go:123] Gathering logs for kube-controller-manager [c1e3435f5898] ...
	I0314 11:15:21.973905   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1e3435f5898"
	I0314 11:15:21.991736   13262 logs.go:123] Gathering logs for kube-apiserver [bc405a5fcc4c] ...
	I0314 11:15:21.991748   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc405a5fcc4c"
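The cycle above repeats for the rest of this log: each healthz probe gives up after five seconds (11:15:16.64 → 11:15:21.64), minikube then enumerates the control-plane containers with docker ps name filters and tails the last 400 lines of each before probing again roughly 2.5 seconds later. A minimal Go sketch of that polling pattern, assuming a hypothetical checkHealthz helper; the function and variable names here are illustrative, not minikube's actual API:

	// checkHealthz probes the apiserver /healthz endpoint with a 5s client
	// timeout, matching the gap between "Checking" and "stopped" lines above.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func checkHealthz(url string) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Assumption for this sketch only; the real client trusts the cluster CA.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get(url)
		if err != nil {
			// e.g. "Client.Timeout exceeded while awaiting headers", as logged above
			return err
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("healthz returned %s", resp.Status)
		}
		return nil
	}

	func main() {
		for {
			if err := checkHealthz("https://10.0.2.15:8443/healthz"); err != nil {
				fmt.Println("stopped:", err)
				// on failure the real code gathers the container logs seen above,
				// then retries until an overall deadline expires
			} else {
				break
			}
			time.Sleep(2500 * time.Millisecond) // roughly the pause between cycles above
		}
	}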
	I0314 11:15:24.507286   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:15:29.509533   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:15:29.509795   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:15:29.535031   13262 logs.go:276] 2 containers: [f9395aa9cac2 bc405a5fcc4c]
	I0314 11:15:29.535148   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:15:29.552628   13262 logs.go:276] 2 containers: [d36e3bec2911 8e89db56a692]
	I0314 11:15:29.552707   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:15:29.565714   13262 logs.go:276] 1 containers: [0301160bab63]
	I0314 11:15:29.565794   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:15:29.577584   13262 logs.go:276] 2 containers: [d82fc4548c26 82be34d06648]
	I0314 11:15:29.577655   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:15:29.591850   13262 logs.go:276] 1 containers: [25bb30ac6daf]
	I0314 11:15:29.591915   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:15:29.602638   13262 logs.go:276] 2 containers: [1f70f1399231 c1e3435f5898]
	I0314 11:15:29.602711   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:15:29.612908   13262 logs.go:276] 0 containers: []
	W0314 11:15:29.612919   13262 logs.go:278] No container was found matching "kindnet"
	I0314 11:15:29.612976   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:15:29.622953   13262 logs.go:276] 0 containers: []
	W0314 11:15:29.622963   13262 logs.go:278] No container was found matching "storage-provisioner"
	I0314 11:15:29.622971   13262 logs.go:123] Gathering logs for Docker ...
	I0314 11:15:29.622976   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:15:29.647538   13262 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:15:29.647547   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:15:29.685682   13262 logs.go:123] Gathering logs for kube-apiserver [bc405a5fcc4c] ...
	I0314 11:15:29.685695   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc405a5fcc4c"
	I0314 11:15:29.698395   13262 logs.go:123] Gathering logs for etcd [8e89db56a692] ...
	I0314 11:15:29.698404   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e89db56a692"
	I0314 11:15:29.716583   13262 logs.go:123] Gathering logs for kube-scheduler [d82fc4548c26] ...
	I0314 11:15:29.716594   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d82fc4548c26"
	I0314 11:15:29.742671   13262 logs.go:123] Gathering logs for kube-controller-manager [1f70f1399231] ...
	I0314 11:15:29.742683   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f70f1399231"
	I0314 11:15:29.759455   13262 logs.go:123] Gathering logs for kube-controller-manager [c1e3435f5898] ...
	I0314 11:15:29.759466   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1e3435f5898"
	I0314 11:15:29.783600   13262 logs.go:123] Gathering logs for kube-apiserver [f9395aa9cac2] ...
	I0314 11:15:29.783611   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9395aa9cac2"
	I0314 11:15:29.797877   13262 logs.go:123] Gathering logs for dmesg ...
	I0314 11:15:29.797886   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:15:29.802241   13262 logs.go:123] Gathering logs for coredns [0301160bab63] ...
	I0314 11:15:29.802248   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0301160bab63"
	I0314 11:15:29.814188   13262 logs.go:123] Gathering logs for kube-proxy [25bb30ac6daf] ...
	I0314 11:15:29.814200   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25bb30ac6daf"
	I0314 11:15:29.825871   13262 logs.go:123] Gathering logs for container status ...
	I0314 11:15:29.825883   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:15:29.838211   13262 logs.go:123] Gathering logs for kubelet ...
	I0314 11:15:29.838223   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:15:29.867398   13262 logs.go:123] Gathering logs for kube-scheduler [82be34d06648] ...
	I0314 11:15:29.867418   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82be34d06648"
	I0314 11:15:29.888492   13262 logs.go:123] Gathering logs for etcd [d36e3bec2911] ...
	I0314 11:15:29.888504   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36e3bec2911"
	I0314 11:15:32.404364   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:15:37.406588   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:15:37.406778   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:15:37.428578   13262 logs.go:276] 2 containers: [f9395aa9cac2 bc405a5fcc4c]
	I0314 11:15:37.428670   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:15:37.441906   13262 logs.go:276] 2 containers: [d36e3bec2911 8e89db56a692]
	I0314 11:15:37.441980   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:15:37.453210   13262 logs.go:276] 1 containers: [0301160bab63]
	I0314 11:15:37.453283   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:15:37.463645   13262 logs.go:276] 2 containers: [d82fc4548c26 82be34d06648]
	I0314 11:15:37.463719   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:15:37.474432   13262 logs.go:276] 1 containers: [25bb30ac6daf]
	I0314 11:15:37.474494   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:15:37.493102   13262 logs.go:276] 2 containers: [1f70f1399231 c1e3435f5898]
	I0314 11:15:37.493176   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:15:37.503721   13262 logs.go:276] 0 containers: []
	W0314 11:15:37.503733   13262 logs.go:278] No container was found matching "kindnet"
	I0314 11:15:37.503785   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:15:37.514105   13262 logs.go:276] 0 containers: []
	W0314 11:15:37.514116   13262 logs.go:278] No container was found matching "storage-provisioner"
	I0314 11:15:37.514125   13262 logs.go:123] Gathering logs for kube-controller-manager [1f70f1399231] ...
	I0314 11:15:37.514130   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f70f1399231"
	I0314 11:15:37.532533   13262 logs.go:123] Gathering logs for kube-controller-manager [c1e3435f5898] ...
	I0314 11:15:37.532545   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1e3435f5898"
	I0314 11:15:37.551135   13262 logs.go:123] Gathering logs for container status ...
	I0314 11:15:37.551145   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:15:37.562884   13262 logs.go:123] Gathering logs for kubelet ...
	I0314 11:15:37.562894   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:15:37.591521   13262 logs.go:123] Gathering logs for coredns [0301160bab63] ...
	I0314 11:15:37.591533   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0301160bab63"
	I0314 11:15:37.602796   13262 logs.go:123] Gathering logs for Docker ...
	I0314 11:15:37.602806   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:15:37.626209   13262 logs.go:123] Gathering logs for dmesg ...
	I0314 11:15:37.626215   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:15:37.630343   13262 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:15:37.630350   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:15:37.670981   13262 logs.go:123] Gathering logs for kube-apiserver [f9395aa9cac2] ...
	I0314 11:15:37.670992   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9395aa9cac2"
	I0314 11:15:37.686310   13262 logs.go:123] Gathering logs for kube-apiserver [bc405a5fcc4c] ...
	I0314 11:15:37.686321   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc405a5fcc4c"
	I0314 11:15:37.702581   13262 logs.go:123] Gathering logs for etcd [d36e3bec2911] ...
	I0314 11:15:37.702590   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36e3bec2911"
	I0314 11:15:37.716957   13262 logs.go:123] Gathering logs for etcd [8e89db56a692] ...
	I0314 11:15:37.716968   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e89db56a692"
	I0314 11:15:37.731708   13262 logs.go:123] Gathering logs for kube-scheduler [d82fc4548c26] ...
	I0314 11:15:37.731719   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d82fc4548c26"
	I0314 11:15:37.755546   13262 logs.go:123] Gathering logs for kube-scheduler [82be34d06648] ...
	I0314 11:15:37.755559   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82be34d06648"
	I0314 11:15:37.770896   13262 logs.go:123] Gathering logs for kube-proxy [25bb30ac6daf] ...
	I0314 11:15:37.770910   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25bb30ac6daf"
	I0314 11:15:40.283038   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:15:45.285285   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:15:45.285578   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:15:45.318424   13262 logs.go:276] 2 containers: [f9395aa9cac2 bc405a5fcc4c]
	I0314 11:15:45.318590   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:15:45.339202   13262 logs.go:276] 2 containers: [d36e3bec2911 8e89db56a692]
	I0314 11:15:45.339306   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:15:45.355160   13262 logs.go:276] 1 containers: [0301160bab63]
	I0314 11:15:45.355245   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:15:45.367286   13262 logs.go:276] 2 containers: [d82fc4548c26 82be34d06648]
	I0314 11:15:45.367368   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:15:45.378091   13262 logs.go:276] 1 containers: [25bb30ac6daf]
	I0314 11:15:45.378159   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:15:45.388871   13262 logs.go:276] 2 containers: [1f70f1399231 c1e3435f5898]
	I0314 11:15:45.388942   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:15:45.399032   13262 logs.go:276] 0 containers: []
	W0314 11:15:45.399049   13262 logs.go:278] No container was found matching "kindnet"
	I0314 11:15:45.399109   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:15:45.409587   13262 logs.go:276] 0 containers: []
	W0314 11:15:45.412120   13262 logs.go:278] No container was found matching "storage-provisioner"
	I0314 11:15:45.412136   13262 logs.go:123] Gathering logs for dmesg ...
	I0314 11:15:45.412147   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:15:45.416286   13262 logs.go:123] Gathering logs for etcd [d36e3bec2911] ...
	I0314 11:15:45.416293   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36e3bec2911"
	I0314 11:15:45.430455   13262 logs.go:123] Gathering logs for kube-controller-manager [1f70f1399231] ...
	I0314 11:15:45.430466   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f70f1399231"
	I0314 11:15:45.449897   13262 logs.go:123] Gathering logs for Docker ...
	I0314 11:15:45.449907   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:15:45.474680   13262 logs.go:123] Gathering logs for container status ...
	I0314 11:15:45.474686   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:15:45.486185   13262 logs.go:123] Gathering logs for kubelet ...
	I0314 11:15:45.486194   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:15:45.516498   13262 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:15:45.516508   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:15:45.551839   13262 logs.go:123] Gathering logs for kube-apiserver [bc405a5fcc4c] ...
	I0314 11:15:45.551853   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc405a5fcc4c"
	I0314 11:15:45.564713   13262 logs.go:123] Gathering logs for kube-scheduler [d82fc4548c26] ...
	I0314 11:15:45.564724   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d82fc4548c26"
	I0314 11:15:45.588044   13262 logs.go:123] Gathering logs for kube-scheduler [82be34d06648] ...
	I0314 11:15:45.588055   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82be34d06648"
	I0314 11:15:45.602778   13262 logs.go:123] Gathering logs for coredns [0301160bab63] ...
	I0314 11:15:45.602791   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0301160bab63"
	I0314 11:15:45.614425   13262 logs.go:123] Gathering logs for kube-proxy [25bb30ac6daf] ...
	I0314 11:15:45.614437   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25bb30ac6daf"
	I0314 11:15:45.626070   13262 logs.go:123] Gathering logs for kube-apiserver [f9395aa9cac2] ...
	I0314 11:15:45.626082   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9395aa9cac2"
	I0314 11:15:45.640358   13262 logs.go:123] Gathering logs for etcd [8e89db56a692] ...
	I0314 11:15:45.640371   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e89db56a692"
	I0314 11:15:45.655120   13262 logs.go:123] Gathering logs for kube-controller-manager [c1e3435f5898] ...
	I0314 11:15:45.655132   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1e3435f5898"
	I0314 11:15:48.173958   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:15:53.176612   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:15:53.176958   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:15:53.214887   13262 logs.go:276] 2 containers: [f9395aa9cac2 bc405a5fcc4c]
	I0314 11:15:53.215034   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:15:53.237482   13262 logs.go:276] 2 containers: [d36e3bec2911 8e89db56a692]
	I0314 11:15:53.237579   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:15:53.252464   13262 logs.go:276] 1 containers: [0301160bab63]
	I0314 11:15:53.252540   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:15:53.265147   13262 logs.go:276] 2 containers: [d82fc4548c26 82be34d06648]
	I0314 11:15:53.265225   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:15:53.276228   13262 logs.go:276] 1 containers: [25bb30ac6daf]
	I0314 11:15:53.276298   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:15:53.287285   13262 logs.go:276] 2 containers: [1f70f1399231 c1e3435f5898]
	I0314 11:15:53.287351   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:15:53.298434   13262 logs.go:276] 0 containers: []
	W0314 11:15:53.298447   13262 logs.go:278] No container was found matching "kindnet"
	I0314 11:15:53.298511   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:15:53.309734   13262 logs.go:276] 0 containers: []
	W0314 11:15:53.309745   13262 logs.go:278] No container was found matching "storage-provisioner"
	I0314 11:15:53.309755   13262 logs.go:123] Gathering logs for kubelet ...
	I0314 11:15:53.309761   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:15:53.339171   13262 logs.go:123] Gathering logs for kube-apiserver [bc405a5fcc4c] ...
	I0314 11:15:53.339182   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc405a5fcc4c"
	I0314 11:15:53.352145   13262 logs.go:123] Gathering logs for etcd [8e89db56a692] ...
	I0314 11:15:53.352157   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e89db56a692"
	I0314 11:15:53.371189   13262 logs.go:123] Gathering logs for coredns [0301160bab63] ...
	I0314 11:15:53.371201   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0301160bab63"
	I0314 11:15:53.382268   13262 logs.go:123] Gathering logs for kube-scheduler [82be34d06648] ...
	I0314 11:15:53.382279   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82be34d06648"
	I0314 11:15:53.397295   13262 logs.go:123] Gathering logs for etcd [d36e3bec2911] ...
	I0314 11:15:53.397305   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36e3bec2911"
	I0314 11:15:53.415284   13262 logs.go:123] Gathering logs for kube-scheduler [d82fc4548c26] ...
	I0314 11:15:53.415293   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d82fc4548c26"
	I0314 11:15:53.437640   13262 logs.go:123] Gathering logs for kube-proxy [25bb30ac6daf] ...
	I0314 11:15:53.437651   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25bb30ac6daf"
	I0314 11:15:53.449517   13262 logs.go:123] Gathering logs for kube-controller-manager [1f70f1399231] ...
	I0314 11:15:53.449529   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f70f1399231"
	I0314 11:15:53.467819   13262 logs.go:123] Gathering logs for Docker ...
	I0314 11:15:53.467830   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:15:53.492801   13262 logs.go:123] Gathering logs for kube-apiserver [f9395aa9cac2] ...
	I0314 11:15:53.492809   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9395aa9cac2"
	I0314 11:15:53.515559   13262 logs.go:123] Gathering logs for dmesg ...
	I0314 11:15:53.515570   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:15:53.519747   13262 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:15:53.519753   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:15:53.555675   13262 logs.go:123] Gathering logs for kube-controller-manager [c1e3435f5898] ...
	I0314 11:15:53.555692   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1e3435f5898"
	I0314 11:15:53.576966   13262 logs.go:123] Gathering logs for container status ...
	I0314 11:15:53.576976   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:15:56.091539   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:16:01.093787   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:16:01.093995   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:16:01.112699   13262 logs.go:276] 2 containers: [f9395aa9cac2 bc405a5fcc4c]
	I0314 11:16:01.112790   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:16:01.126473   13262 logs.go:276] 2 containers: [d36e3bec2911 8e89db56a692]
	I0314 11:16:01.126553   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:16:01.138051   13262 logs.go:276] 1 containers: [0301160bab63]
	I0314 11:16:01.138123   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:16:01.148597   13262 logs.go:276] 2 containers: [d82fc4548c26 82be34d06648]
	I0314 11:16:01.148673   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:16:01.158749   13262 logs.go:276] 1 containers: [25bb30ac6daf]
	I0314 11:16:01.158823   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:16:01.169483   13262 logs.go:276] 2 containers: [1f70f1399231 c1e3435f5898]
	I0314 11:16:01.169551   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:16:01.179798   13262 logs.go:276] 0 containers: []
	W0314 11:16:01.179809   13262 logs.go:278] No container was found matching "kindnet"
	I0314 11:16:01.179868   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:16:01.190728   13262 logs.go:276] 0 containers: []
	W0314 11:16:01.190738   13262 logs.go:278] No container was found matching "storage-provisioner"
	I0314 11:16:01.190747   13262 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:16:01.190753   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:16:01.226013   13262 logs.go:123] Gathering logs for coredns [0301160bab63] ...
	I0314 11:16:01.226024   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0301160bab63"
	I0314 11:16:01.238179   13262 logs.go:123] Gathering logs for kube-scheduler [82be34d06648] ...
	I0314 11:16:01.238191   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82be34d06648"
	I0314 11:16:01.253563   13262 logs.go:123] Gathering logs for dmesg ...
	I0314 11:16:01.253575   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:16:01.258108   13262 logs.go:123] Gathering logs for kube-apiserver [f9395aa9cac2] ...
	I0314 11:16:01.258116   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9395aa9cac2"
	I0314 11:16:01.272086   13262 logs.go:123] Gathering logs for kube-proxy [25bb30ac6daf] ...
	I0314 11:16:01.272097   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25bb30ac6daf"
	I0314 11:16:01.285160   13262 logs.go:123] Gathering logs for kube-controller-manager [1f70f1399231] ...
	I0314 11:16:01.285174   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f70f1399231"
	I0314 11:16:01.302637   13262 logs.go:123] Gathering logs for kube-controller-manager [c1e3435f5898] ...
	I0314 11:16:01.302652   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1e3435f5898"
	I0314 11:16:01.325899   13262 logs.go:123] Gathering logs for container status ...
	I0314 11:16:01.325910   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:16:01.337983   13262 logs.go:123] Gathering logs for kubelet ...
	I0314 11:16:01.337996   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:16:01.369108   13262 logs.go:123] Gathering logs for kube-apiserver [bc405a5fcc4c] ...
	I0314 11:16:01.369123   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc405a5fcc4c"
	I0314 11:16:01.381665   13262 logs.go:123] Gathering logs for etcd [d36e3bec2911] ...
	I0314 11:16:01.381677   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36e3bec2911"
	I0314 11:16:01.395297   13262 logs.go:123] Gathering logs for etcd [8e89db56a692] ...
	I0314 11:16:01.395312   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e89db56a692"
	I0314 11:16:01.415489   13262 logs.go:123] Gathering logs for kube-scheduler [d82fc4548c26] ...
	I0314 11:16:01.415502   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d82fc4548c26"
	I0314 11:16:01.438947   13262 logs.go:123] Gathering logs for Docker ...
	I0314 11:16:01.438963   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:16:03.965764   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:16:09.058052   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:16:09.058267   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:16:09.082640   13262 logs.go:276] 2 containers: [f9395aa9cac2 bc405a5fcc4c]
	I0314 11:16:09.082739   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:16:09.098727   13262 logs.go:276] 2 containers: [d36e3bec2911 8e89db56a692]
	I0314 11:16:09.098795   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:16:09.111380   13262 logs.go:276] 1 containers: [0301160bab63]
	I0314 11:16:09.111450   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:16:09.123065   13262 logs.go:276] 2 containers: [d82fc4548c26 82be34d06648]
	I0314 11:16:09.123137   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:16:09.133167   13262 logs.go:276] 1 containers: [25bb30ac6daf]
	I0314 11:16:09.133233   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:16:09.143760   13262 logs.go:276] 2 containers: [1f70f1399231 c1e3435f5898]
	I0314 11:16:09.143826   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:16:09.154410   13262 logs.go:276] 0 containers: []
	W0314 11:16:09.154423   13262 logs.go:278] No container was found matching "kindnet"
	I0314 11:16:09.154488   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:16:09.165097   13262 logs.go:276] 0 containers: []
	W0314 11:16:09.165109   13262 logs.go:278] No container was found matching "storage-provisioner"
	I0314 11:16:09.165119   13262 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:16:09.165125   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:16:09.200547   13262 logs.go:123] Gathering logs for kube-scheduler [d82fc4548c26] ...
	I0314 11:16:09.200559   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d82fc4548c26"
	I0314 11:16:09.223884   13262 logs.go:123] Gathering logs for kube-proxy [25bb30ac6daf] ...
	I0314 11:16:09.223895   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25bb30ac6daf"
	I0314 11:16:09.235879   13262 logs.go:123] Gathering logs for kubelet ...
	I0314 11:16:09.235893   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:16:09.265667   13262 logs.go:123] Gathering logs for coredns [0301160bab63] ...
	I0314 11:16:09.265676   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0301160bab63"
	I0314 11:16:09.277272   13262 logs.go:123] Gathering logs for kube-controller-manager [1f70f1399231] ...
	I0314 11:16:09.277283   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f70f1399231"
	I0314 11:16:09.294601   13262 logs.go:123] Gathering logs for kube-controller-manager [c1e3435f5898] ...
	I0314 11:16:09.294613   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1e3435f5898"
	I0314 11:16:09.311801   13262 logs.go:123] Gathering logs for Docker ...
	I0314 11:16:09.311811   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:16:09.334966   13262 logs.go:123] Gathering logs for container status ...
	I0314 11:16:09.334974   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:16:09.346974   13262 logs.go:123] Gathering logs for dmesg ...
	I0314 11:16:09.346986   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:16:09.351659   13262 logs.go:123] Gathering logs for kube-apiserver [f9395aa9cac2] ...
	I0314 11:16:09.351666   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9395aa9cac2"
	I0314 11:16:09.365944   13262 logs.go:123] Gathering logs for kube-apiserver [bc405a5fcc4c] ...
	I0314 11:16:09.365958   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc405a5fcc4c"
	I0314 11:16:09.379283   13262 logs.go:123] Gathering logs for etcd [d36e3bec2911] ...
	I0314 11:16:09.379293   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36e3bec2911"
	I0314 11:16:09.400739   13262 logs.go:123] Gathering logs for etcd [8e89db56a692] ...
	I0314 11:16:09.400750   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e89db56a692"
	I0314 11:16:09.415443   13262 logs.go:123] Gathering logs for kube-scheduler [82be34d06648] ...
	I0314 11:16:09.415457   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82be34d06648"
	I0314 11:16:11.932400   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:16:16.934791   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:16:16.935042   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:16:16.958886   13262 logs.go:276] 2 containers: [f9395aa9cac2 bc405a5fcc4c]
	I0314 11:16:16.959006   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:16:16.979485   13262 logs.go:276] 2 containers: [d36e3bec2911 8e89db56a692]
	I0314 11:16:16.979572   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:16:16.991411   13262 logs.go:276] 1 containers: [0301160bab63]
	I0314 11:16:16.991484   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:16:17.002301   13262 logs.go:276] 2 containers: [d82fc4548c26 82be34d06648]
	I0314 11:16:17.002372   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:16:17.013094   13262 logs.go:276] 1 containers: [25bb30ac6daf]
	I0314 11:16:17.013166   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:16:17.023697   13262 logs.go:276] 2 containers: [1f70f1399231 c1e3435f5898]
	I0314 11:16:17.023782   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:16:17.033925   13262 logs.go:276] 0 containers: []
	W0314 11:16:17.033937   13262 logs.go:278] No container was found matching "kindnet"
	I0314 11:16:17.033995   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:16:17.043897   13262 logs.go:276] 0 containers: []
	W0314 11:16:17.043908   13262 logs.go:278] No container was found matching "storage-provisioner"
	I0314 11:16:17.043918   13262 logs.go:123] Gathering logs for kube-scheduler [82be34d06648] ...
	I0314 11:16:17.043924   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82be34d06648"
	I0314 11:16:17.059048   13262 logs.go:123] Gathering logs for kube-controller-manager [c1e3435f5898] ...
	I0314 11:16:17.059061   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1e3435f5898"
	I0314 11:16:17.082981   13262 logs.go:123] Gathering logs for kubelet ...
	I0314 11:16:17.082993   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:16:17.113517   13262 logs.go:123] Gathering logs for dmesg ...
	I0314 11:16:17.113528   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:16:17.117539   13262 logs.go:123] Gathering logs for kube-apiserver [bc405a5fcc4c] ...
	I0314 11:16:17.117548   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc405a5fcc4c"
	I0314 11:16:17.130034   13262 logs.go:123] Gathering logs for etcd [8e89db56a692] ...
	I0314 11:16:17.130045   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e89db56a692"
	I0314 11:16:17.145032   13262 logs.go:123] Gathering logs for coredns [0301160bab63] ...
	I0314 11:16:17.145043   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0301160bab63"
	I0314 11:16:17.156302   13262 logs.go:123] Gathering logs for kube-scheduler [d82fc4548c26] ...
	I0314 11:16:17.156315   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d82fc4548c26"
	I0314 11:16:17.179298   13262 logs.go:123] Gathering logs for etcd [d36e3bec2911] ...
	I0314 11:16:17.179308   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36e3bec2911"
	I0314 11:16:17.193102   13262 logs.go:123] Gathering logs for kube-proxy [25bb30ac6daf] ...
	I0314 11:16:17.193114   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25bb30ac6daf"
	I0314 11:16:17.204429   13262 logs.go:123] Gathering logs for kube-controller-manager [1f70f1399231] ...
	I0314 11:16:17.204444   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f70f1399231"
	I0314 11:16:17.221963   13262 logs.go:123] Gathering logs for Docker ...
	I0314 11:16:17.221974   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:16:17.244695   13262 logs.go:123] Gathering logs for container status ...
	I0314 11:16:17.244706   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:16:17.256282   13262 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:16:17.256294   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:16:17.290913   13262 logs.go:123] Gathering logs for kube-apiserver [f9395aa9cac2] ...
	I0314 11:16:17.290926   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9395aa9cac2"
	I0314 11:16:19.807359   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:16:24.809952   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:16:24.810103   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:16:24.822711   13262 logs.go:276] 2 containers: [f9395aa9cac2 bc405a5fcc4c]
	I0314 11:16:24.822781   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:16:24.833488   13262 logs.go:276] 2 containers: [d36e3bec2911 8e89db56a692]
	I0314 11:16:24.833558   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:16:24.845290   13262 logs.go:276] 1 containers: [0301160bab63]
	I0314 11:16:24.845364   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:16:24.856973   13262 logs.go:276] 2 containers: [d82fc4548c26 82be34d06648]
	I0314 11:16:24.857048   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:16:24.867491   13262 logs.go:276] 1 containers: [25bb30ac6daf]
	I0314 11:16:24.867559   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:16:24.878105   13262 logs.go:276] 2 containers: [1f70f1399231 c1e3435f5898]
	I0314 11:16:24.878178   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:16:24.888469   13262 logs.go:276] 0 containers: []
	W0314 11:16:24.888480   13262 logs.go:278] No container was found matching "kindnet"
	I0314 11:16:24.888538   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:16:24.898689   13262 logs.go:276] 0 containers: []
	W0314 11:16:24.898701   13262 logs.go:278] No container was found matching "storage-provisioner"
	I0314 11:16:24.898709   13262 logs.go:123] Gathering logs for etcd [d36e3bec2911] ...
	I0314 11:16:24.898715   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36e3bec2911"
	I0314 11:16:24.912222   13262 logs.go:123] Gathering logs for dmesg ...
	I0314 11:16:24.912235   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:16:24.916439   13262 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:16:24.916445   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:16:24.950545   13262 logs.go:123] Gathering logs for kube-controller-manager [1f70f1399231] ...
	I0314 11:16:24.950560   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f70f1399231"
	I0314 11:16:24.968318   13262 logs.go:123] Gathering logs for kube-apiserver [f9395aa9cac2] ...
	I0314 11:16:24.968330   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9395aa9cac2"
	I0314 11:16:24.982230   13262 logs.go:123] Gathering logs for coredns [0301160bab63] ...
	I0314 11:16:24.982240   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0301160bab63"
	I0314 11:16:24.993216   13262 logs.go:123] Gathering logs for kube-proxy [25bb30ac6daf] ...
	I0314 11:16:24.993227   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25bb30ac6daf"
	I0314 11:16:25.004279   13262 logs.go:123] Gathering logs for container status ...
	I0314 11:16:25.004290   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:16:25.016136   13262 logs.go:123] Gathering logs for kube-apiserver [bc405a5fcc4c] ...
	I0314 11:16:25.016148   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc405a5fcc4c"
	I0314 11:16:25.031324   13262 logs.go:123] Gathering logs for etcd [8e89db56a692] ...
	I0314 11:16:25.031336   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e89db56a692"
	I0314 11:16:25.045687   13262 logs.go:123] Gathering logs for kube-scheduler [82be34d06648] ...
	I0314 11:16:25.045700   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82be34d06648"
	I0314 11:16:25.061100   13262 logs.go:123] Gathering logs for kube-controller-manager [c1e3435f5898] ...
	I0314 11:16:25.061113   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1e3435f5898"
	I0314 11:16:25.078355   13262 logs.go:123] Gathering logs for Docker ...
	I0314 11:16:25.078365   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:16:25.102139   13262 logs.go:123] Gathering logs for kubelet ...
	I0314 11:16:25.102147   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:16:25.129770   13262 logs.go:123] Gathering logs for kube-scheduler [d82fc4548c26] ...
	I0314 11:16:25.129778   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d82fc4548c26"
	I0314 11:16:27.659784   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:16:32.662153   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:16:32.662366   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:16:32.682657   13262 logs.go:276] 2 containers: [f9395aa9cac2 bc405a5fcc4c]
	I0314 11:16:32.682769   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:16:32.705460   13262 logs.go:276] 2 containers: [d36e3bec2911 8e89db56a692]
	I0314 11:16:32.705540   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:16:32.716678   13262 logs.go:276] 1 containers: [0301160bab63]
	I0314 11:16:32.716748   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:16:32.727632   13262 logs.go:276] 2 containers: [d82fc4548c26 82be34d06648]
	I0314 11:16:32.727703   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:16:32.743180   13262 logs.go:276] 1 containers: [25bb30ac6daf]
	I0314 11:16:32.743253   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:16:32.753642   13262 logs.go:276] 2 containers: [1f70f1399231 c1e3435f5898]
	I0314 11:16:32.753706   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:16:32.765498   13262 logs.go:276] 0 containers: []
	W0314 11:16:32.765513   13262 logs.go:278] No container was found matching "kindnet"
	I0314 11:16:32.765572   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:16:32.777752   13262 logs.go:276] 0 containers: []
	W0314 11:16:32.777771   13262 logs.go:278] No container was found matching "storage-provisioner"
	I0314 11:16:32.777779   13262 logs.go:123] Gathering logs for kube-controller-manager [1f70f1399231] ...
	I0314 11:16:32.777784   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f70f1399231"
	I0314 11:16:32.795428   13262 logs.go:123] Gathering logs for kube-apiserver [bc405a5fcc4c] ...
	I0314 11:16:32.795438   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc405a5fcc4c"
	I0314 11:16:32.808286   13262 logs.go:123] Gathering logs for etcd [8e89db56a692] ...
	I0314 11:16:32.808298   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e89db56a692"
	I0314 11:16:32.823016   13262 logs.go:123] Gathering logs for container status ...
	I0314 11:16:32.823028   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:16:32.836841   13262 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:16:32.836853   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:16:32.878150   13262 logs.go:123] Gathering logs for kube-apiserver [f9395aa9cac2] ...
	I0314 11:16:32.878164   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9395aa9cac2"
	I0314 11:16:32.892493   13262 logs.go:123] Gathering logs for kube-scheduler [d82fc4548c26] ...
	I0314 11:16:32.892504   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d82fc4548c26"
	I0314 11:16:32.914751   13262 logs.go:123] Gathering logs for kube-scheduler [82be34d06648] ...
	I0314 11:16:32.914764   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82be34d06648"
	I0314 11:16:32.929869   13262 logs.go:123] Gathering logs for kube-proxy [25bb30ac6daf] ...
	I0314 11:16:32.929879   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25bb30ac6daf"
	I0314 11:16:32.942095   13262 logs.go:123] Gathering logs for kube-controller-manager [c1e3435f5898] ...
	I0314 11:16:32.942109   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1e3435f5898"
	I0314 11:16:32.958836   13262 logs.go:123] Gathering logs for dmesg ...
	I0314 11:16:32.958847   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:16:32.963670   13262 logs.go:123] Gathering logs for etcd [d36e3bec2911] ...
	I0314 11:16:32.963676   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36e3bec2911"
	I0314 11:16:32.976903   13262 logs.go:123] Gathering logs for coredns [0301160bab63] ...
	I0314 11:16:32.976914   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0301160bab63"
	I0314 11:16:32.988082   13262 logs.go:123] Gathering logs for Docker ...
	I0314 11:16:32.988095   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:16:33.011633   13262 logs.go:123] Gathering logs for kubelet ...
	I0314 11:16:33.011643   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:16:35.541625   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:16:40.543923   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:16:40.544017   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:16:40.555024   13262 logs.go:276] 2 containers: [f9395aa9cac2 bc405a5fcc4c]
	I0314 11:16:40.555103   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:16:40.565496   13262 logs.go:276] 2 containers: [d36e3bec2911 8e89db56a692]
	I0314 11:16:40.565564   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:16:40.576068   13262 logs.go:276] 1 containers: [0301160bab63]
	I0314 11:16:40.576135   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:16:40.587127   13262 logs.go:276] 2 containers: [d82fc4548c26 82be34d06648]
	I0314 11:16:40.587202   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:16:40.597471   13262 logs.go:276] 1 containers: [25bb30ac6daf]
	I0314 11:16:40.597533   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:16:40.608168   13262 logs.go:276] 2 containers: [1f70f1399231 c1e3435f5898]
	I0314 11:16:40.608244   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:16:40.618057   13262 logs.go:276] 0 containers: []
	W0314 11:16:40.618068   13262 logs.go:278] No container was found matching "kindnet"
	I0314 11:16:40.618122   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:16:40.627899   13262 logs.go:276] 0 containers: []
	W0314 11:16:40.627914   13262 logs.go:278] No container was found matching "storage-provisioner"
	I0314 11:16:40.627922   13262 logs.go:123] Gathering logs for kube-apiserver [bc405a5fcc4c] ...
	I0314 11:16:40.627928   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc405a5fcc4c"
	I0314 11:16:40.640865   13262 logs.go:123] Gathering logs for kube-controller-manager [c1e3435f5898] ...
	I0314 11:16:40.640876   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1e3435f5898"
	I0314 11:16:40.658911   13262 logs.go:123] Gathering logs for Docker ...
	I0314 11:16:40.658922   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:16:40.682566   13262 logs.go:123] Gathering logs for kubelet ...
	I0314 11:16:40.682579   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:16:40.712115   13262 logs.go:123] Gathering logs for dmesg ...
	I0314 11:16:40.712127   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:16:40.716220   13262 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:16:40.716228   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:16:40.750346   13262 logs.go:123] Gathering logs for kube-apiserver [f9395aa9cac2] ...
	I0314 11:16:40.750360   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f9395aa9cac2"
	I0314 11:16:40.764226   13262 logs.go:123] Gathering logs for kube-scheduler [d82fc4548c26] ...
	I0314 11:16:40.764236   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d82fc4548c26"
	I0314 11:16:40.787458   13262 logs.go:123] Gathering logs for kube-proxy [25bb30ac6daf] ...
	I0314 11:16:40.787469   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25bb30ac6daf"
	I0314 11:16:40.805664   13262 logs.go:123] Gathering logs for container status ...
	I0314 11:16:40.805674   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:16:40.817546   13262 logs.go:123] Gathering logs for etcd [d36e3bec2911] ...
	I0314 11:16:40.817557   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d36e3bec2911"
	I0314 11:16:40.831760   13262 logs.go:123] Gathering logs for coredns [0301160bab63] ...
	I0314 11:16:40.831775   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0301160bab63"
	I0314 11:16:40.843506   13262 logs.go:123] Gathering logs for kube-scheduler [82be34d06648] ...
	I0314 11:16:40.843517   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82be34d06648"
	I0314 11:16:40.859345   13262 logs.go:123] Gathering logs for kube-controller-manager [1f70f1399231] ...
	I0314 11:16:40.859356   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f70f1399231"
	I0314 11:16:40.886032   13262 logs.go:123] Gathering logs for etcd [8e89db56a692] ...
	I0314 11:16:40.886041   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8e89db56a692"
	I0314 11:16:43.402145   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:16:48.404795   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:16:48.404955   13262 kubeadm.go:591] duration metric: took 4m2.971525875s to restartPrimaryControlPlane
	W0314 11:16:48.405086   13262 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0314 11:16:48.405121   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0314 11:16:49.344695   13262 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 11:16:49.349573   13262 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 11:16:49.352229   13262 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 11:16:49.354869   13262 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 11:16:49.354874   13262 kubeadm.go:156] found existing configuration files:
	
	I0314 11:16:49.354891   13262 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52332 /etc/kubernetes/admin.conf
	I0314 11:16:49.357572   13262 kubeadm.go:162] "https://control-plane.minikube.internal:52332" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:52332 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 11:16:49.357599   13262 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 11:16:49.360141   13262 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52332 /etc/kubernetes/kubelet.conf
	I0314 11:16:49.363058   13262 kubeadm.go:162] "https://control-plane.minikube.internal:52332" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:52332 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 11:16:49.363081   13262 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 11:16:49.366113   13262 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52332 /etc/kubernetes/controller-manager.conf
	I0314 11:16:49.368712   13262 kubeadm.go:162] "https://control-plane.minikube.internal:52332" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:52332 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 11:16:49.368735   13262 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 11:16:49.371405   13262 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52332 /etc/kubernetes/scheduler.conf
	I0314 11:16:49.374417   13262 kubeadm.go:162] "https://control-plane.minikube.internal:52332" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:52332 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 11:16:49.374441   13262 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
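The run above is minikube's stale-kubeconfig cleanup: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint and removed when the endpoint is not found (here every grep exits with status 2 because the files do not exist at all, so the rm calls are no-ops). A minimal shell sketch of the same pattern, with the endpoint value taken from the log above:

    # sketch only; ENDPOINT comes from the control-plane.minikube.internal lines above
    ENDPOINT="https://control-plane.minikube.internal:52332"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
        # any non-zero grep (pattern absent or file missing) triggers removal
        sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done
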
	I0314 11:16:49.376967   13262 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0314 11:16:49.394374   13262 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0314 11:16:49.394490   13262 kubeadm.go:309] [preflight] Running pre-flight checks
	I0314 11:16:49.447845   13262 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0314 11:16:49.447938   13262 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0314 11:16:49.448006   13262 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0314 11:16:49.500170   13262 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0314 11:16:49.508117   13262 out.go:204]   - Generating certificates and keys ...
	I0314 11:16:49.508155   13262 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0314 11:16:49.508189   13262 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0314 11:16:49.508229   13262 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0314 11:16:49.508263   13262 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0314 11:16:49.508301   13262 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0314 11:16:49.508327   13262 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0314 11:16:49.508362   13262 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0314 11:16:49.508436   13262 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0314 11:16:49.508497   13262 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0314 11:16:49.508536   13262 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0314 11:16:49.508559   13262 kubeadm.go:309] [certs] Using the existing "sa" key
	I0314 11:16:49.508595   13262 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0314 11:16:49.586364   13262 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0314 11:16:49.630491   13262 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0314 11:16:49.819305   13262 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0314 11:16:50.001724   13262 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0314 11:16:50.031103   13262 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0314 11:16:50.031479   13262 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0314 11:16:50.031598   13262 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0314 11:16:50.101389   13262 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0314 11:16:50.104752   13262 out.go:204]   - Booting up control plane ...
	I0314 11:16:50.104806   13262 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0314 11:16:50.104855   13262 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0314 11:16:50.104893   13262 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0314 11:16:50.104933   13262 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0314 11:16:50.105934   13262 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0314 11:16:54.107582   13262 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.001759 seconds
	I0314 11:16:54.107642   13262 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0314 11:16:54.112265   13262 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0314 11:16:54.619559   13262 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0314 11:16:54.619787   13262 kubeadm.go:309] [mark-control-plane] Marking the node stopped-upgrade-157000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0314 11:16:55.125455   13262 kubeadm.go:309] [bootstrap-token] Using token: xpsove.xojynfksv7i7mjeh
	I0314 11:16:55.131880   13262 out.go:204]   - Configuring RBAC rules ...
	I0314 11:16:55.131957   13262 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0314 11:16:55.132009   13262 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0314 11:16:55.138884   13262 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0314 11:16:55.139934   13262 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I0314 11:16:55.140818   13262 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0314 11:16:55.141678   13262 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0314 11:16:55.145198   13262 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0314 11:16:55.275826   13262 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0314 11:16:55.530158   13262 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0314 11:16:55.530658   13262 kubeadm.go:309] 
	I0314 11:16:55.530694   13262 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0314 11:16:55.530700   13262 kubeadm.go:309] 
	I0314 11:16:55.530751   13262 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0314 11:16:55.530757   13262 kubeadm.go:309] 
	I0314 11:16:55.530778   13262 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0314 11:16:55.530822   13262 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0314 11:16:55.530851   13262 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0314 11:16:55.530854   13262 kubeadm.go:309] 
	I0314 11:16:55.530885   13262 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0314 11:16:55.530888   13262 kubeadm.go:309] 
	I0314 11:16:55.530921   13262 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0314 11:16:55.530926   13262 kubeadm.go:309] 
	I0314 11:16:55.530959   13262 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0314 11:16:55.531000   13262 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0314 11:16:55.531042   13262 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0314 11:16:55.531047   13262 kubeadm.go:309] 
	I0314 11:16:55.531099   13262 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0314 11:16:55.531156   13262 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0314 11:16:55.531161   13262 kubeadm.go:309] 
	I0314 11:16:55.531225   13262 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token xpsove.xojynfksv7i7mjeh \
	I0314 11:16:55.531288   13262 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8e5a4174d82744a5f88c6921b8e1e2cb9a0b16334ed79a2160efb286b25bc185 \
	I0314 11:16:55.531308   13262 kubeadm.go:309] 	--control-plane 
	I0314 11:16:55.531312   13262 kubeadm.go:309] 
	I0314 11:16:55.531382   13262 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0314 11:16:55.531389   13262 kubeadm.go:309] 
	I0314 11:16:55.531434   13262 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token xpsove.xojynfksv7i7mjeh \
	I0314 11:16:55.531499   13262 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8e5a4174d82744a5f88c6921b8e1e2cb9a0b16334ed79a2160efb286b25bc185 
	I0314 11:16:55.531643   13262 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
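kubeadm's only warning in this run is that the kubelet systemd unit is not enabled, so it would not come back after a node reboot; the fix it suggests is a one-liner on the node:

    sudo systemctl enable kubelet.service
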
	I0314 11:16:55.531691   13262 cni.go:84] Creating CNI manager for ""
	I0314 11:16:55.531703   13262 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0314 11:16:55.533133   13262 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0314 11:16:55.541116   13262 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0314 11:16:55.543986   13262 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
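minikube writes its bridge CNI config to /etc/cni/net.d/1-k8s.conflist (457 bytes; the contents are not shown in the log). Purely as an illustration of the file format, a typical bridge + host-local conflist looks like the sketch below; the names and subnet are placeholders, and the file minikube actually writes may differ:

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        }
      ]
    }
    EOF
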
	I0314 11:16:55.549116   13262 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0314 11:16:55.549155   13262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 11:16:55.549223   13262 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-157000 minikube.k8s.io/updated_at=2024_03_14T11_16_55_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7 minikube.k8s.io/name=stopped-upgrade-157000 minikube.k8s.io/primary=true
	I0314 11:16:55.588760   13262 ops.go:34] apiserver oom_adj: -16
	I0314 11:16:55.588775   13262 kubeadm.go:1106] duration metric: took 39.654375ms to wait for elevateKubeSystemPrivileges
	W0314 11:16:55.588794   13262 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0314 11:16:55.588799   13262 kubeadm.go:393] duration metric: took 4m10.169230833s to StartCluster
	I0314 11:16:55.588808   13262 settings.go:142] acquiring lock: {Name:mk5ca7daa9f67a4c042500e8aa0b177318634dbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 11:16:55.588894   13262 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18384-10823/kubeconfig
	I0314 11:16:55.589955   13262 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18384-10823/kubeconfig: {Name:mk22117ed76e85ca64a0d4fa77d593f7fc7d1176 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 11:16:55.590147   13262 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0314 11:16:55.593174   13262 out.go:177] * Verifying Kubernetes components...
	I0314 11:16:55.590154   13262 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0314 11:16:55.590323   13262 config.go:182] Loaded profile config "stopped-upgrade-157000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0314 11:16:55.601182   13262 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 11:16:55.601193   13262 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-157000"
	I0314 11:16:55.601215   13262 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-157000"
	W0314 11:16:55.601220   13262 addons.go:243] addon storage-provisioner should already be in state true
	I0314 11:16:55.601221   13262 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-157000"
	I0314 11:16:55.601266   13262 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-157000"
	I0314 11:16:55.601241   13262 host.go:66] Checking if "stopped-upgrade-157000" exists ...
	I0314 11:16:55.603190   13262 kapi.go:59] client config for stopped-upgrade-157000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/stopped-upgrade-157000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/stopped-upgrade-157000/client.key", CAFile:"/Users/jenkins/minikube-integration/18384-10823/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105dd8630), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0314 11:16:55.603413   13262 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-157000"
	W0314 11:16:55.603420   13262 addons.go:243] addon default-storageclass should already be in state true
	I0314 11:16:55.603428   13262 host.go:66] Checking if "stopped-upgrade-157000" exists ...
	I0314 11:16:55.608097   13262 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 11:16:55.612050   13262 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 11:16:55.612060   13262 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0314 11:16:55.612070   13262 sshutil.go:53] new ssh client: &{IP:localhost Port:52297 SSHKeyPath:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/stopped-upgrade-157000/id_rsa Username:docker}
	I0314 11:16:55.612820   13262 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0314 11:16:55.612823   13262 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0314 11:16:55.612828   13262 sshutil.go:53] new ssh client: &{IP:localhost Port:52297 SSHKeyPath:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/stopped-upgrade-157000/id_rsa Username:docker}
	I0314 11:16:55.681361   13262 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 11:16:55.685924   13262 api_server.go:52] waiting for apiserver process to appear ...
	I0314 11:16:55.685970   13262 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 11:16:55.689860   13262 api_server.go:72] duration metric: took 99.698667ms to wait for apiserver process to appear ...
	I0314 11:16:55.689866   13262 api_server.go:88] waiting for apiserver healthz status ...
	I0314 11:16:55.689873   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:16:55.711957   13262 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 11:16:55.722921   13262 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
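Addon installation is two steps per addon: scp the manifest into /etc/kubernetes/addons, then apply it on the node with the bundled kubectl against the local kubeconfig, exactly as the two Run lines above do. Reproduced as a standalone command (paths taken from the log):

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
        /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
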
	I0314 11:17:00.691963   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:17:00.692008   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:17:05.692400   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:17:05.692425   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:17:10.693171   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:17:10.693192   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:17:15.693700   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:17:15.693727   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:17:20.694394   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:17:20.694437   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:17:25.695322   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:17:25.695350   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0314 11:17:26.100938   13262 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0314 11:17:26.105250   13262 out.go:177] * Enabled addons: storage-provisioner
	I0314 11:17:26.113164   13262 addons.go:505] duration metric: took 30.52289075s for enable addons: enabled=[storage-provisioner]
	I0314 11:17:30.696792   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:17:30.696867   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:17:35.697560   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:17:35.697611   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:17:40.698774   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:17:40.698828   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:17:45.700364   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:17:45.700392   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:17:50.702638   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:17:50.702672   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:17:55.704924   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
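From this point the apiserver never answers /healthz: each probe of https://10.0.2.15:8443/healthz times out after 5 seconds, and minikube falls back to collecting container logs for diagnosis. The probe can be reproduced by hand with curl; -k is needed because the endpoint serves the cluster's self-signed certificate (a sketch, not minikube's own code, which does this in Go):

    # poll the apiserver health endpoint until it reports ok
    until curl -sk --max-time 5 https://10.0.2.15:8443/healthz | grep -q ok; do
        echo "apiserver not healthy yet, retrying in 5s"
        sleep 5
    done
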
	I0314 11:17:55.705083   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:17:55.715560   13262 logs.go:276] 1 containers: [d1542ac9663a]
	I0314 11:17:55.715636   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:17:55.726684   13262 logs.go:276] 1 containers: [bd019f4d619a]
	I0314 11:17:55.726754   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:17:55.737135   13262 logs.go:276] 2 containers: [60aada0d97ab 92093e266d4d]
	I0314 11:17:55.737208   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:17:55.747727   13262 logs.go:276] 1 containers: [96e093b518ce]
	I0314 11:17:55.747800   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:17:55.758610   13262 logs.go:276] 1 containers: [a1479f7acffc]
	I0314 11:17:55.758679   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:17:55.769857   13262 logs.go:276] 1 containers: [f5aed84f8939]
	I0314 11:17:55.769938   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:17:55.781600   13262 logs.go:276] 0 containers: []
	W0314 11:17:55.781614   13262 logs.go:278] No container was found matching "kindnet"
	I0314 11:17:55.781674   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:17:55.799417   13262 logs.go:276] 1 containers: [30a94033f9ad]
	I0314 11:17:55.799434   13262 logs.go:123] Gathering logs for kube-scheduler [96e093b518ce] ...
	I0314 11:17:55.799440   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96e093b518ce"
	I0314 11:17:55.815446   13262 logs.go:123] Gathering logs for storage-provisioner [30a94033f9ad] ...
	I0314 11:17:55.815458   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30a94033f9ad"
	I0314 11:17:55.827126   13262 logs.go:123] Gathering logs for Docker ...
	I0314 11:17:55.827139   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:17:55.852114   13262 logs.go:123] Gathering logs for kubelet ...
	I0314 11:17:55.852127   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:17:55.882708   13262 logs.go:123] Gathering logs for dmesg ...
	I0314 11:17:55.882716   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:17:55.887126   13262 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:17:55.887132   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:17:55.923433   13262 logs.go:123] Gathering logs for kube-apiserver [d1542ac9663a] ...
	I0314 11:17:55.923447   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1542ac9663a"
	I0314 11:17:55.938227   13262 logs.go:123] Gathering logs for coredns [92093e266d4d] ...
	I0314 11:17:55.938238   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92093e266d4d"
	I0314 11:17:55.949667   13262 logs.go:123] Gathering logs for container status ...
	I0314 11:17:55.949679   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:17:55.961251   13262 logs.go:123] Gathering logs for etcd [bd019f4d619a] ...
	I0314 11:17:55.961262   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd019f4d619a"
	I0314 11:17:55.975862   13262 logs.go:123] Gathering logs for coredns [60aada0d97ab] ...
	I0314 11:17:55.975872   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60aada0d97ab"
	I0314 11:17:55.987316   13262 logs.go:123] Gathering logs for kube-proxy [a1479f7acffc] ...
	I0314 11:17:55.987326   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1479f7acffc"
	I0314 11:17:55.999263   13262 logs.go:123] Gathering logs for kube-controller-manager [f5aed84f8939] ...
	I0314 11:17:55.999274   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5aed84f8939"
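Every diagnostic pass that follows repeats the same pattern: list each k8s_<component> container with a docker ps name filter, then tail the last 400 lines of its log. Condensed into a shell sketch (component names taken from the log; the helper loop itself is not part of minikube):

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager storage-provisioner; do
        for id in $(docker ps -a --filter=name=k8s_$c --format='{{.ID}}'); do
            echo "== $c ($id) =="
            docker logs --tail 400 "$id"
        done
    done
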
	I0314 11:17:58.520307   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:18:03.522958   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:18:03.523217   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:18:03.545131   13262 logs.go:276] 1 containers: [d1542ac9663a]
	I0314 11:18:03.545263   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:18:03.560409   13262 logs.go:276] 1 containers: [bd019f4d619a]
	I0314 11:18:03.560483   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:18:03.573049   13262 logs.go:276] 2 containers: [60aada0d97ab 92093e266d4d]
	I0314 11:18:03.573124   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:18:03.583554   13262 logs.go:276] 1 containers: [96e093b518ce]
	I0314 11:18:03.583618   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:18:03.594272   13262 logs.go:276] 1 containers: [a1479f7acffc]
	I0314 11:18:03.594341   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:18:03.604377   13262 logs.go:276] 1 containers: [f5aed84f8939]
	I0314 11:18:03.604447   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:18:03.614465   13262 logs.go:276] 0 containers: []
	W0314 11:18:03.614480   13262 logs.go:278] No container was found matching "kindnet"
	I0314 11:18:03.614532   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:18:03.624877   13262 logs.go:276] 1 containers: [30a94033f9ad]
	I0314 11:18:03.624893   13262 logs.go:123] Gathering logs for kube-scheduler [96e093b518ce] ...
	I0314 11:18:03.624898   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96e093b518ce"
	I0314 11:18:03.645586   13262 logs.go:123] Gathering logs for kube-proxy [a1479f7acffc] ...
	I0314 11:18:03.645598   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1479f7acffc"
	I0314 11:18:03.656730   13262 logs.go:123] Gathering logs for container status ...
	I0314 11:18:03.656742   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:18:03.668346   13262 logs.go:123] Gathering logs for kube-apiserver [d1542ac9663a] ...
	I0314 11:18:03.668360   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1542ac9663a"
	I0314 11:18:03.682899   13262 logs.go:123] Gathering logs for etcd [bd019f4d619a] ...
	I0314 11:18:03.682909   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd019f4d619a"
	I0314 11:18:03.696699   13262 logs.go:123] Gathering logs for coredns [92093e266d4d] ...
	I0314 11:18:03.696709   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92093e266d4d"
	I0314 11:18:03.708251   13262 logs.go:123] Gathering logs for coredns [60aada0d97ab] ...
	I0314 11:18:03.708262   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60aada0d97ab"
	I0314 11:18:03.719880   13262 logs.go:123] Gathering logs for kube-controller-manager [f5aed84f8939] ...
	I0314 11:18:03.719892   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5aed84f8939"
	I0314 11:18:03.738147   13262 logs.go:123] Gathering logs for storage-provisioner [30a94033f9ad] ...
	I0314 11:18:03.738158   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30a94033f9ad"
	I0314 11:18:03.749477   13262 logs.go:123] Gathering logs for Docker ...
	I0314 11:18:03.749489   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:18:03.773985   13262 logs.go:123] Gathering logs for kubelet ...
	I0314 11:18:03.773995   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:18:03.805292   13262 logs.go:123] Gathering logs for dmesg ...
	I0314 11:18:03.805301   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:18:03.809336   13262 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:18:03.809342   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:18:06.344203   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:18:11.346786   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:18:11.346878   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:18:11.357539   13262 logs.go:276] 1 containers: [d1542ac9663a]
	I0314 11:18:11.357610   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:18:11.367768   13262 logs.go:276] 1 containers: [bd019f4d619a]
	I0314 11:18:11.367835   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:18:11.379344   13262 logs.go:276] 2 containers: [60aada0d97ab 92093e266d4d]
	I0314 11:18:11.379420   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:18:11.390091   13262 logs.go:276] 1 containers: [96e093b518ce]
	I0314 11:18:11.390162   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:18:11.400912   13262 logs.go:276] 1 containers: [a1479f7acffc]
	I0314 11:18:11.400986   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:18:11.411759   13262 logs.go:276] 1 containers: [f5aed84f8939]
	I0314 11:18:11.411834   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:18:11.422935   13262 logs.go:276] 0 containers: []
	W0314 11:18:11.422947   13262 logs.go:278] No container was found matching "kindnet"
	I0314 11:18:11.423002   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:18:11.433647   13262 logs.go:276] 1 containers: [30a94033f9ad]
	I0314 11:18:11.433663   13262 logs.go:123] Gathering logs for dmesg ...
	I0314 11:18:11.433669   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:18:11.438291   13262 logs.go:123] Gathering logs for etcd [bd019f4d619a] ...
	I0314 11:18:11.438300   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd019f4d619a"
	I0314 11:18:11.452033   13262 logs.go:123] Gathering logs for coredns [60aada0d97ab] ...
	I0314 11:18:11.452043   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60aada0d97ab"
	I0314 11:18:11.467494   13262 logs.go:123] Gathering logs for kube-scheduler [96e093b518ce] ...
	I0314 11:18:11.467505   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96e093b518ce"
	I0314 11:18:11.485781   13262 logs.go:123] Gathering logs for kube-proxy [a1479f7acffc] ...
	I0314 11:18:11.485791   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1479f7acffc"
	I0314 11:18:11.496888   13262 logs.go:123] Gathering logs for Docker ...
	I0314 11:18:11.496897   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:18:11.520528   13262 logs.go:123] Gathering logs for container status ...
	I0314 11:18:11.520540   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:18:11.531684   13262 logs.go:123] Gathering logs for kubelet ...
	I0314 11:18:11.531696   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:18:11.562057   13262 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:18:11.562065   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:18:11.596579   13262 logs.go:123] Gathering logs for kube-apiserver [d1542ac9663a] ...
	I0314 11:18:11.596590   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1542ac9663a"
	I0314 11:18:11.611034   13262 logs.go:123] Gathering logs for coredns [92093e266d4d] ...
	I0314 11:18:11.611044   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92093e266d4d"
	I0314 11:18:11.622463   13262 logs.go:123] Gathering logs for kube-controller-manager [f5aed84f8939] ...
	I0314 11:18:11.622474   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5aed84f8939"
	I0314 11:18:11.641474   13262 logs.go:123] Gathering logs for storage-provisioner [30a94033f9ad] ...
	I0314 11:18:11.641486   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30a94033f9ad"
	I0314 11:18:14.154794   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:18:19.157093   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:18:19.157259   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:18:19.168093   13262 logs.go:276] 1 containers: [d1542ac9663a]
	I0314 11:18:19.168153   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:18:19.178466   13262 logs.go:276] 1 containers: [bd019f4d619a]
	I0314 11:18:19.178540   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:18:19.188977   13262 logs.go:276] 2 containers: [60aada0d97ab 92093e266d4d]
	I0314 11:18:19.189052   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:18:19.199175   13262 logs.go:276] 1 containers: [96e093b518ce]
	I0314 11:18:19.199250   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:18:19.210678   13262 logs.go:276] 1 containers: [a1479f7acffc]
	I0314 11:18:19.210752   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:18:19.221560   13262 logs.go:276] 1 containers: [f5aed84f8939]
	I0314 11:18:19.221630   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:18:19.233261   13262 logs.go:276] 0 containers: []
	W0314 11:18:19.233274   13262 logs.go:278] No container was found matching "kindnet"
	I0314 11:18:19.233334   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:18:19.243564   13262 logs.go:276] 1 containers: [30a94033f9ad]
	I0314 11:18:19.243581   13262 logs.go:123] Gathering logs for kubelet ...
	I0314 11:18:19.243586   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:18:19.274957   13262 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:18:19.274967   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:18:19.309155   13262 logs.go:123] Gathering logs for kube-apiserver [d1542ac9663a] ...
	I0314 11:18:19.309166   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1542ac9663a"
	I0314 11:18:19.325199   13262 logs.go:123] Gathering logs for coredns [60aada0d97ab] ...
	I0314 11:18:19.325211   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60aada0d97ab"
	I0314 11:18:19.336612   13262 logs.go:123] Gathering logs for kube-proxy [a1479f7acffc] ...
	I0314 11:18:19.336623   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1479f7acffc"
	I0314 11:18:19.348141   13262 logs.go:123] Gathering logs for storage-provisioner [30a94033f9ad] ...
	I0314 11:18:19.348153   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30a94033f9ad"
	I0314 11:18:19.359717   13262 logs.go:123] Gathering logs for container status ...
	I0314 11:18:19.359728   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:18:19.370961   13262 logs.go:123] Gathering logs for dmesg ...
	I0314 11:18:19.370973   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:18:19.375560   13262 logs.go:123] Gathering logs for etcd [bd019f4d619a] ...
	I0314 11:18:19.375566   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd019f4d619a"
	I0314 11:18:19.389045   13262 logs.go:123] Gathering logs for coredns [92093e266d4d] ...
	I0314 11:18:19.389059   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92093e266d4d"
	I0314 11:18:19.400721   13262 logs.go:123] Gathering logs for kube-scheduler [96e093b518ce] ...
	I0314 11:18:19.400733   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96e093b518ce"
	I0314 11:18:19.416040   13262 logs.go:123] Gathering logs for kube-controller-manager [f5aed84f8939] ...
	I0314 11:18:19.416051   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5aed84f8939"
	I0314 11:18:19.433601   13262 logs.go:123] Gathering logs for Docker ...
	I0314 11:18:19.433613   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:18:21.959204   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:18:26.961415   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:18:26.961523   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:18:26.972296   13262 logs.go:276] 1 containers: [d1542ac9663a]
	I0314 11:18:26.972365   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:18:26.983416   13262 logs.go:276] 1 containers: [bd019f4d619a]
	I0314 11:18:26.983485   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:18:26.994938   13262 logs.go:276] 2 containers: [60aada0d97ab 92093e266d4d]
	I0314 11:18:26.995010   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:18:27.006814   13262 logs.go:276] 1 containers: [96e093b518ce]
	I0314 11:18:27.006886   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:18:27.017828   13262 logs.go:276] 1 containers: [a1479f7acffc]
	I0314 11:18:27.017900   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:18:27.028969   13262 logs.go:276] 1 containers: [f5aed84f8939]
	I0314 11:18:27.029033   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:18:27.039245   13262 logs.go:276] 0 containers: []
	W0314 11:18:27.039256   13262 logs.go:278] No container was found matching "kindnet"
	I0314 11:18:27.039302   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:18:27.054087   13262 logs.go:276] 1 containers: [30a94033f9ad]
	I0314 11:18:27.054103   13262 logs.go:123] Gathering logs for dmesg ...
	I0314 11:18:27.054109   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:18:27.058462   13262 logs.go:123] Gathering logs for kube-apiserver [d1542ac9663a] ...
	I0314 11:18:27.058468   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1542ac9663a"
	I0314 11:18:27.072548   13262 logs.go:123] Gathering logs for coredns [60aada0d97ab] ...
	I0314 11:18:27.072558   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60aada0d97ab"
	I0314 11:18:27.084060   13262 logs.go:123] Gathering logs for kube-scheduler [96e093b518ce] ...
	I0314 11:18:27.084069   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96e093b518ce"
	I0314 11:18:27.100544   13262 logs.go:123] Gathering logs for kube-proxy [a1479f7acffc] ...
	I0314 11:18:27.100557   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1479f7acffc"
	I0314 11:18:27.114512   13262 logs.go:123] Gathering logs for kube-controller-manager [f5aed84f8939] ...
	I0314 11:18:27.114522   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5aed84f8939"
	I0314 11:18:27.133699   13262 logs.go:123] Gathering logs for container status ...
	I0314 11:18:27.133710   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:18:27.145856   13262 logs.go:123] Gathering logs for kubelet ...
	I0314 11:18:27.145867   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:18:27.177733   13262 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:18:27.177743   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:18:27.212243   13262 logs.go:123] Gathering logs for etcd [bd019f4d619a] ...
	I0314 11:18:27.212259   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd019f4d619a"
	I0314 11:18:27.226174   13262 logs.go:123] Gathering logs for coredns [92093e266d4d] ...
	I0314 11:18:27.226183   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92093e266d4d"
	I0314 11:18:27.237875   13262 logs.go:123] Gathering logs for storage-provisioner [30a94033f9ad] ...
	I0314 11:18:27.237889   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30a94033f9ad"
	I0314 11:18:27.249940   13262 logs.go:123] Gathering logs for Docker ...
	I0314 11:18:27.249954   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:18:29.778583   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:18:34.781178   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:18:34.781300   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:18:34.793528   13262 logs.go:276] 1 containers: [d1542ac9663a]
	I0314 11:18:34.793604   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:18:34.805597   13262 logs.go:276] 1 containers: [bd019f4d619a]
	I0314 11:18:34.805675   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:18:34.818290   13262 logs.go:276] 2 containers: [60aada0d97ab 92093e266d4d]
	I0314 11:18:34.818362   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:18:34.829982   13262 logs.go:276] 1 containers: [96e093b518ce]
	I0314 11:18:34.830044   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:18:34.841644   13262 logs.go:276] 1 containers: [a1479f7acffc]
	I0314 11:18:34.841717   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:18:34.852800   13262 logs.go:276] 1 containers: [f5aed84f8939]
	I0314 11:18:34.852874   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:18:34.863361   13262 logs.go:276] 0 containers: []
	W0314 11:18:34.863383   13262 logs.go:278] No container was found matching "kindnet"
	I0314 11:18:34.863439   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:18:34.874194   13262 logs.go:276] 1 containers: [30a94033f9ad]
	I0314 11:18:34.874209   13262 logs.go:123] Gathering logs for kube-scheduler [96e093b518ce] ...
	I0314 11:18:34.874214   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96e093b518ce"
	I0314 11:18:34.891207   13262 logs.go:123] Gathering logs for kube-controller-manager [f5aed84f8939] ...
	I0314 11:18:34.891218   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5aed84f8939"
	I0314 11:18:34.910803   13262 logs.go:123] Gathering logs for storage-provisioner [30a94033f9ad] ...
	I0314 11:18:34.910813   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30a94033f9ad"
	I0314 11:18:34.922979   13262 logs.go:123] Gathering logs for Docker ...
	I0314 11:18:34.922991   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:18:34.948355   13262 logs.go:123] Gathering logs for container status ...
	I0314 11:18:34.948363   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:18:34.960307   13262 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:18:34.960320   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:18:34.999786   13262 logs.go:123] Gathering logs for kube-apiserver [d1542ac9663a] ...
	I0314 11:18:34.999796   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1542ac9663a"
	I0314 11:18:35.015279   13262 logs.go:123] Gathering logs for coredns [60aada0d97ab] ...
	I0314 11:18:35.015292   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60aada0d97ab"
	I0314 11:18:35.027200   13262 logs.go:123] Gathering logs for coredns [92093e266d4d] ...
	I0314 11:18:35.027212   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92093e266d4d"
	I0314 11:18:35.038555   13262 logs.go:123] Gathering logs for kube-proxy [a1479f7acffc] ...
	I0314 11:18:35.038565   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1479f7acffc"
	I0314 11:18:35.050164   13262 logs.go:123] Gathering logs for kubelet ...
	I0314 11:18:35.050175   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:18:35.080296   13262 logs.go:123] Gathering logs for dmesg ...
	I0314 11:18:35.080304   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:18:35.084155   13262 logs.go:123] Gathering logs for etcd [bd019f4d619a] ...
	I0314 11:18:35.084160   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd019f4d619a"
	I0314 11:18:37.599178   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:18:42.599715   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:18:42.600123   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:18:42.646064   13262 logs.go:276] 1 containers: [d1542ac9663a]
	I0314 11:18:42.646183   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:18:42.665814   13262 logs.go:276] 1 containers: [bd019f4d619a]
	I0314 11:18:42.665912   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:18:42.680128   13262 logs.go:276] 2 containers: [60aada0d97ab 92093e266d4d]
	I0314 11:18:42.680201   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:18:42.692022   13262 logs.go:276] 1 containers: [96e093b518ce]
	I0314 11:18:42.692087   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:18:42.703061   13262 logs.go:276] 1 containers: [a1479f7acffc]
	I0314 11:18:42.703136   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:18:42.717299   13262 logs.go:276] 1 containers: [f5aed84f8939]
	I0314 11:18:42.717367   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:18:42.728321   13262 logs.go:276] 0 containers: []
	W0314 11:18:42.728333   13262 logs.go:278] No container was found matching "kindnet"
	I0314 11:18:42.728387   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:18:42.738646   13262 logs.go:276] 1 containers: [30a94033f9ad]
	I0314 11:18:42.738661   13262 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:18:42.738668   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:18:42.773541   13262 logs.go:123] Gathering logs for kube-apiserver [d1542ac9663a] ...
	I0314 11:18:42.773553   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1542ac9663a"
	I0314 11:18:42.787533   13262 logs.go:123] Gathering logs for kube-controller-manager [f5aed84f8939] ...
	I0314 11:18:42.787546   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5aed84f8939"
	I0314 11:18:42.804636   13262 logs.go:123] Gathering logs for Docker ...
	I0314 11:18:42.804646   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:18:42.829390   13262 logs.go:123] Gathering logs for container status ...
	I0314 11:18:42.829397   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:18:42.840663   13262 logs.go:123] Gathering logs for kubelet ...
	I0314 11:18:42.840674   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:18:42.871465   13262 logs.go:123] Gathering logs for dmesg ...
	I0314 11:18:42.871473   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:18:42.875802   13262 logs.go:123] Gathering logs for coredns [92093e266d4d] ...
	I0314 11:18:42.875810   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92093e266d4d"
	I0314 11:18:42.887250   13262 logs.go:123] Gathering logs for kube-scheduler [96e093b518ce] ...
	I0314 11:18:42.887262   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96e093b518ce"
	I0314 11:18:42.901938   13262 logs.go:123] Gathering logs for kube-proxy [a1479f7acffc] ...
	I0314 11:18:42.901949   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1479f7acffc"
	I0314 11:18:42.915300   13262 logs.go:123] Gathering logs for storage-provisioner [30a94033f9ad] ...
	I0314 11:18:42.915310   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30a94033f9ad"
	I0314 11:18:42.927120   13262 logs.go:123] Gathering logs for etcd [bd019f4d619a] ...
	I0314 11:18:42.927130   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd019f4d619a"
	I0314 11:18:42.941069   13262 logs.go:123] Gathering logs for coredns [60aada0d97ab] ...
	I0314 11:18:42.941081   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60aada0d97ab"
	I0314 11:18:45.454725   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:18:50.457521   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:18:50.457962   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:18:50.497139   13262 logs.go:276] 1 containers: [d1542ac9663a]
	I0314 11:18:50.497272   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:18:50.518149   13262 logs.go:276] 1 containers: [bd019f4d619a]
	I0314 11:18:50.518248   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:18:50.536187   13262 logs.go:276] 2 containers: [60aada0d97ab 92093e266d4d]
	I0314 11:18:50.536267   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:18:50.548083   13262 logs.go:276] 1 containers: [96e093b518ce]
	I0314 11:18:50.548143   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:18:50.558847   13262 logs.go:276] 1 containers: [a1479f7acffc]
	I0314 11:18:50.558907   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:18:50.569413   13262 logs.go:276] 1 containers: [f5aed84f8939]
	I0314 11:18:50.569484   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:18:50.579273   13262 logs.go:276] 0 containers: []
	W0314 11:18:50.579285   13262 logs.go:278] No container was found matching "kindnet"
	I0314 11:18:50.579334   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:18:50.589455   13262 logs.go:276] 1 containers: [30a94033f9ad]
	I0314 11:18:50.589469   13262 logs.go:123] Gathering logs for storage-provisioner [30a94033f9ad] ...
	I0314 11:18:50.589475   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30a94033f9ad"
	I0314 11:18:50.601714   13262 logs.go:123] Gathering logs for container status ...
	I0314 11:18:50.601725   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:18:50.613177   13262 logs.go:123] Gathering logs for kubelet ...
	I0314 11:18:50.613188   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:18:50.642658   13262 logs.go:123] Gathering logs for dmesg ...
	I0314 11:18:50.642666   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:18:50.647370   13262 logs.go:123] Gathering logs for kube-proxy [a1479f7acffc] ...
	I0314 11:18:50.647376   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1479f7acffc"
	I0314 11:18:50.659636   13262 logs.go:123] Gathering logs for coredns [60aada0d97ab] ...
	I0314 11:18:50.659646   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60aada0d97ab"
	I0314 11:18:50.671187   13262 logs.go:123] Gathering logs for coredns [92093e266d4d] ...
	I0314 11:18:50.671197   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92093e266d4d"
	I0314 11:18:50.682410   13262 logs.go:123] Gathering logs for kube-scheduler [96e093b518ce] ...
	I0314 11:18:50.682421   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96e093b518ce"
	I0314 11:18:50.697160   13262 logs.go:123] Gathering logs for kube-controller-manager [f5aed84f8939] ...
	I0314 11:18:50.697170   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5aed84f8939"
	I0314 11:18:50.714193   13262 logs.go:123] Gathering logs for Docker ...
	I0314 11:18:50.714203   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:18:50.738902   13262 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:18:50.738911   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:18:50.772974   13262 logs.go:123] Gathering logs for kube-apiserver [d1542ac9663a] ...
	I0314 11:18:50.772986   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1542ac9663a"
	I0314 11:18:50.792328   13262 logs.go:123] Gathering logs for etcd [bd019f4d619a] ...
	I0314 11:18:50.792339   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd019f4d619a"
	I0314 11:18:53.313319   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:18:58.315687   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:18:58.315943   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:18:58.341794   13262 logs.go:276] 1 containers: [d1542ac9663a]
	I0314 11:18:58.341915   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:18:58.359820   13262 logs.go:276] 1 containers: [bd019f4d619a]
	I0314 11:18:58.359903   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:18:58.372599   13262 logs.go:276] 2 containers: [60aada0d97ab 92093e266d4d]
	I0314 11:18:58.372667   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:18:58.383566   13262 logs.go:276] 1 containers: [96e093b518ce]
	I0314 11:18:58.383635   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:18:58.394347   13262 logs.go:276] 1 containers: [a1479f7acffc]
	I0314 11:18:58.394409   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:18:58.404735   13262 logs.go:276] 1 containers: [f5aed84f8939]
	I0314 11:18:58.404792   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:18:58.414504   13262 logs.go:276] 0 containers: []
	W0314 11:18:58.414515   13262 logs.go:278] No container was found matching "kindnet"
	I0314 11:18:58.414570   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:18:58.424709   13262 logs.go:276] 1 containers: [30a94033f9ad]
	I0314 11:18:58.424724   13262 logs.go:123] Gathering logs for kube-controller-manager [f5aed84f8939] ...
	I0314 11:18:58.424730   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5aed84f8939"
	I0314 11:18:58.441820   13262 logs.go:123] Gathering logs for storage-provisioner [30a94033f9ad] ...
	I0314 11:18:58.441832   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30a94033f9ad"
	I0314 11:18:58.457604   13262 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:18:58.457617   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:18:58.498737   13262 logs.go:123] Gathering logs for etcd [bd019f4d619a] ...
	I0314 11:18:58.498748   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd019f4d619a"
	I0314 11:18:58.512700   13262 logs.go:123] Gathering logs for coredns [92093e266d4d] ...
	I0314 11:18:58.512713   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92093e266d4d"
	I0314 11:18:58.524610   13262 logs.go:123] Gathering logs for kube-scheduler [96e093b518ce] ...
	I0314 11:18:58.524623   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96e093b518ce"
	I0314 11:18:58.539337   13262 logs.go:123] Gathering logs for kube-proxy [a1479f7acffc] ...
	I0314 11:18:58.539346   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1479f7acffc"
	I0314 11:18:58.554682   13262 logs.go:123] Gathering logs for container status ...
	I0314 11:18:58.554695   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:18:58.566009   13262 logs.go:123] Gathering logs for kubelet ...
	I0314 11:18:58.566022   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:18:58.595373   13262 logs.go:123] Gathering logs for dmesg ...
	I0314 11:18:58.595381   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:18:58.599684   13262 logs.go:123] Gathering logs for kube-apiserver [d1542ac9663a] ...
	I0314 11:18:58.599692   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1542ac9663a"
	I0314 11:18:58.613909   13262 logs.go:123] Gathering logs for coredns [60aada0d97ab] ...
	I0314 11:18:58.613919   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60aada0d97ab"
	I0314 11:18:58.625132   13262 logs.go:123] Gathering logs for Docker ...
	I0314 11:18:58.625142   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:19:01.152244   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:19:06.154916   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:19:06.155341   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:19:06.195787   13262 logs.go:276] 1 containers: [d1542ac9663a]
	I0314 11:19:06.195924   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:19:06.217215   13262 logs.go:276] 1 containers: [bd019f4d619a]
	I0314 11:19:06.217351   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:19:06.232554   13262 logs.go:276] 2 containers: [60aada0d97ab 92093e266d4d]
	I0314 11:19:06.232630   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:19:06.244829   13262 logs.go:276] 1 containers: [96e093b518ce]
	I0314 11:19:06.244892   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:19:06.255885   13262 logs.go:276] 1 containers: [a1479f7acffc]
	I0314 11:19:06.255953   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:19:06.266486   13262 logs.go:276] 1 containers: [f5aed84f8939]
	I0314 11:19:06.266547   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:19:06.276735   13262 logs.go:276] 0 containers: []
	W0314 11:19:06.276746   13262 logs.go:278] No container was found matching "kindnet"
	I0314 11:19:06.276801   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:19:06.287083   13262 logs.go:276] 1 containers: [30a94033f9ad]
	I0314 11:19:06.287098   13262 logs.go:123] Gathering logs for dmesg ...
	I0314 11:19:06.287103   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:19:06.292402   13262 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:19:06.292410   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:19:06.329496   13262 logs.go:123] Gathering logs for etcd [bd019f4d619a] ...
	I0314 11:19:06.329510   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd019f4d619a"
	I0314 11:19:06.344026   13262 logs.go:123] Gathering logs for kube-scheduler [96e093b518ce] ...
	I0314 11:19:06.344041   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96e093b518ce"
	I0314 11:19:06.359557   13262 logs.go:123] Gathering logs for kube-proxy [a1479f7acffc] ...
	I0314 11:19:06.359567   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1479f7acffc"
	I0314 11:19:06.371479   13262 logs.go:123] Gathering logs for container status ...
	I0314 11:19:06.371493   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:19:06.382908   13262 logs.go:123] Gathering logs for kubelet ...
	I0314 11:19:06.382920   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:19:06.413193   13262 logs.go:123] Gathering logs for coredns [60aada0d97ab] ...
	I0314 11:19:06.413203   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60aada0d97ab"
	I0314 11:19:06.428736   13262 logs.go:123] Gathering logs for coredns [92093e266d4d] ...
	I0314 11:19:06.428746   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92093e266d4d"
	I0314 11:19:06.442778   13262 logs.go:123] Gathering logs for kube-controller-manager [f5aed84f8939] ...
	I0314 11:19:06.442788   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5aed84f8939"
	I0314 11:19:06.460308   13262 logs.go:123] Gathering logs for storage-provisioner [30a94033f9ad] ...
	I0314 11:19:06.460319   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30a94033f9ad"
	I0314 11:19:06.471432   13262 logs.go:123] Gathering logs for Docker ...
	I0314 11:19:06.471440   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:19:06.494889   13262 logs.go:123] Gathering logs for kube-apiserver [d1542ac9663a] ...
	I0314 11:19:06.494896   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1542ac9663a"
	I0314 11:19:09.010913   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:19:14.013380   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:19:14.013762   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:19:14.054525   13262 logs.go:276] 1 containers: [d1542ac9663a]
	I0314 11:19:14.054664   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:19:14.076171   13262 logs.go:276] 1 containers: [bd019f4d619a]
	I0314 11:19:14.076262   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:19:14.091159   13262 logs.go:276] 4 containers: [e5c26b613394 a3e7f4c263c9 60aada0d97ab 92093e266d4d]
	I0314 11:19:14.091238   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:19:14.106755   13262 logs.go:276] 1 containers: [96e093b518ce]
	I0314 11:19:14.106826   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:19:14.116774   13262 logs.go:276] 1 containers: [a1479f7acffc]
	I0314 11:19:14.116843   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:19:14.126639   13262 logs.go:276] 1 containers: [f5aed84f8939]
	I0314 11:19:14.126698   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:19:14.136404   13262 logs.go:276] 0 containers: []
	W0314 11:19:14.136416   13262 logs.go:278] No container was found matching "kindnet"
	I0314 11:19:14.136474   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:19:14.146822   13262 logs.go:276] 1 containers: [30a94033f9ad]
	I0314 11:19:14.146837   13262 logs.go:123] Gathering logs for coredns [a3e7f4c263c9] ...
	I0314 11:19:14.146844   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3e7f4c263c9"
	I0314 11:19:14.158013   13262 logs.go:123] Gathering logs for kube-scheduler [96e093b518ce] ...
	I0314 11:19:14.158024   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96e093b518ce"
	I0314 11:19:14.172908   13262 logs.go:123] Gathering logs for coredns [e5c26b613394] ...
	I0314 11:19:14.172917   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5c26b613394"
	I0314 11:19:14.183780   13262 logs.go:123] Gathering logs for etcd [bd019f4d619a] ...
	I0314 11:19:14.183794   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd019f4d619a"
	I0314 11:19:14.199203   13262 logs.go:123] Gathering logs for kube-proxy [a1479f7acffc] ...
	I0314 11:19:14.199214   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1479f7acffc"
	I0314 11:19:14.211215   13262 logs.go:123] Gathering logs for kube-controller-manager [f5aed84f8939] ...
	I0314 11:19:14.211225   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5aed84f8939"
	I0314 11:19:14.227832   13262 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:19:14.227842   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:19:14.261948   13262 logs.go:123] Gathering logs for kube-apiserver [d1542ac9663a] ...
	I0314 11:19:14.261959   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1542ac9663a"
	I0314 11:19:14.276192   13262 logs.go:123] Gathering logs for coredns [60aada0d97ab] ...
	I0314 11:19:14.276202   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60aada0d97ab"
	I0314 11:19:14.294427   13262 logs.go:123] Gathering logs for coredns [92093e266d4d] ...
	I0314 11:19:14.294437   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92093e266d4d"
	I0314 11:19:14.305989   13262 logs.go:123] Gathering logs for container status ...
	I0314 11:19:14.305999   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:19:14.318761   13262 logs.go:123] Gathering logs for kubelet ...
	I0314 11:19:14.318773   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:19:14.350195   13262 logs.go:123] Gathering logs for storage-provisioner [30a94033f9ad] ...
	I0314 11:19:14.350204   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30a94033f9ad"
	I0314 11:19:14.361277   13262 logs.go:123] Gathering logs for Docker ...
	I0314 11:19:14.361289   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:19:14.385753   13262 logs.go:123] Gathering logs for dmesg ...
	I0314 11:19:14.385760   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:19:16.892262   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:19:21.893370   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:19:21.893790   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:19:21.932644   13262 logs.go:276] 1 containers: [d1542ac9663a]
	I0314 11:19:21.932772   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:19:21.955618   13262 logs.go:276] 1 containers: [bd019f4d619a]
	I0314 11:19:21.955735   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:19:21.971913   13262 logs.go:276] 4 containers: [e5c26b613394 a3e7f4c263c9 60aada0d97ab 92093e266d4d]
	I0314 11:19:21.971997   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:19:21.985278   13262 logs.go:276] 1 containers: [96e093b518ce]
	I0314 11:19:21.985349   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:19:21.996048   13262 logs.go:276] 1 containers: [a1479f7acffc]
	I0314 11:19:21.996113   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:19:22.006733   13262 logs.go:276] 1 containers: [f5aed84f8939]
	I0314 11:19:22.006791   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:19:22.017502   13262 logs.go:276] 0 containers: []
	W0314 11:19:22.017511   13262 logs.go:278] No container was found matching "kindnet"
	I0314 11:19:22.017564   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:19:22.028461   13262 logs.go:276] 1 containers: [30a94033f9ad]
	I0314 11:19:22.028479   13262 logs.go:123] Gathering logs for kubelet ...
	I0314 11:19:22.028484   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:19:22.059618   13262 logs.go:123] Gathering logs for kube-apiserver [d1542ac9663a] ...
	I0314 11:19:22.059630   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1542ac9663a"
	I0314 11:19:22.074155   13262 logs.go:123] Gathering logs for coredns [a3e7f4c263c9] ...
	I0314 11:19:22.074168   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3e7f4c263c9"
	I0314 11:19:22.085727   13262 logs.go:123] Gathering logs for kube-scheduler [96e093b518ce] ...
	I0314 11:19:22.085738   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96e093b518ce"
	I0314 11:19:22.101306   13262 logs.go:123] Gathering logs for Docker ...
	I0314 11:19:22.101316   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:19:22.125541   13262 logs.go:123] Gathering logs for container status ...
	I0314 11:19:22.125557   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:19:22.137693   13262 logs.go:123] Gathering logs for coredns [60aada0d97ab] ...
	I0314 11:19:22.137705   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60aada0d97ab"
	I0314 11:19:22.150219   13262 logs.go:123] Gathering logs for coredns [92093e266d4d] ...
	I0314 11:19:22.150233   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92093e266d4d"
	I0314 11:19:22.162715   13262 logs.go:123] Gathering logs for kube-controller-manager [f5aed84f8939] ...
	I0314 11:19:22.162728   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5aed84f8939"
	I0314 11:19:22.180828   13262 logs.go:123] Gathering logs for dmesg ...
	I0314 11:19:22.180838   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:19:22.184955   13262 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:19:22.184961   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:19:22.219370   13262 logs.go:123] Gathering logs for coredns [e5c26b613394] ...
	I0314 11:19:22.219383   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5c26b613394"
	I0314 11:19:22.231612   13262 logs.go:123] Gathering logs for storage-provisioner [30a94033f9ad] ...
	I0314 11:19:22.231625   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30a94033f9ad"
	I0314 11:19:22.243317   13262 logs.go:123] Gathering logs for etcd [bd019f4d619a] ...
	I0314 11:19:22.243328   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd019f4d619a"
	I0314 11:19:22.261141   13262 logs.go:123] Gathering logs for kube-proxy [a1479f7acffc] ...
	I0314 11:19:22.261151   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1479f7acffc"
	I0314 11:19:24.775192   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:19:29.777869   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:19:29.778165   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:19:29.811763   13262 logs.go:276] 1 containers: [d1542ac9663a]
	I0314 11:19:29.811887   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:19:29.832362   13262 logs.go:276] 1 containers: [bd019f4d619a]
	I0314 11:19:29.832440   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:19:29.847236   13262 logs.go:276] 4 containers: [e5c26b613394 a3e7f4c263c9 60aada0d97ab 92093e266d4d]
	I0314 11:19:29.847321   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:19:29.859153   13262 logs.go:276] 1 containers: [96e093b518ce]
	I0314 11:19:29.859210   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:19:29.874264   13262 logs.go:276] 1 containers: [a1479f7acffc]
	I0314 11:19:29.874332   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:19:29.885535   13262 logs.go:276] 1 containers: [f5aed84f8939]
	I0314 11:19:29.885594   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:19:29.896841   13262 logs.go:276] 0 containers: []
	W0314 11:19:29.896852   13262 logs.go:278] No container was found matching "kindnet"
	I0314 11:19:29.896902   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:19:29.907424   13262 logs.go:276] 1 containers: [30a94033f9ad]
	I0314 11:19:29.907442   13262 logs.go:123] Gathering logs for container status ...
	I0314 11:19:29.907447   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:19:29.919559   13262 logs.go:123] Gathering logs for kubelet ...
	I0314 11:19:29.919570   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:19:29.951090   13262 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:19:29.951100   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:19:29.985328   13262 logs.go:123] Gathering logs for etcd [bd019f4d619a] ...
	I0314 11:19:29.985340   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd019f4d619a"
	I0314 11:19:30.003634   13262 logs.go:123] Gathering logs for dmesg ...
	I0314 11:19:30.003643   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:19:30.007863   13262 logs.go:123] Gathering logs for kube-scheduler [96e093b518ce] ...
	I0314 11:19:30.007868   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96e093b518ce"
	I0314 11:19:30.028187   13262 logs.go:123] Gathering logs for coredns [92093e266d4d] ...
	I0314 11:19:30.028196   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92093e266d4d"
	I0314 11:19:30.041026   13262 logs.go:123] Gathering logs for kube-proxy [a1479f7acffc] ...
	I0314 11:19:30.041040   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1479f7acffc"
	I0314 11:19:30.053988   13262 logs.go:123] Gathering logs for kube-controller-manager [f5aed84f8939] ...
	I0314 11:19:30.053999   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5aed84f8939"
	I0314 11:19:30.071718   13262 logs.go:123] Gathering logs for Docker ...
	I0314 11:19:30.071727   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:19:30.096238   13262 logs.go:123] Gathering logs for kube-apiserver [d1542ac9663a] ...
	I0314 11:19:30.096245   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1542ac9663a"
	I0314 11:19:30.111277   13262 logs.go:123] Gathering logs for coredns [e5c26b613394] ...
	I0314 11:19:30.111287   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5c26b613394"
	I0314 11:19:30.123801   13262 logs.go:123] Gathering logs for coredns [60aada0d97ab] ...
	I0314 11:19:30.123810   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60aada0d97ab"
	I0314 11:19:30.137551   13262 logs.go:123] Gathering logs for coredns [a3e7f4c263c9] ...
	I0314 11:19:30.137562   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3e7f4c263c9"
	I0314 11:19:30.149728   13262 logs.go:123] Gathering logs for storage-provisioner [30a94033f9ad] ...
	I0314 11:19:30.149740   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30a94033f9ad"
	I0314 11:19:32.663309   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:19:37.665561   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:19:37.665794   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:19:37.692787   13262 logs.go:276] 1 containers: [d1542ac9663a]
	I0314 11:19:37.692904   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:19:37.709762   13262 logs.go:276] 1 containers: [bd019f4d619a]
	I0314 11:19:37.709839   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:19:37.723176   13262 logs.go:276] 4 containers: [e5c26b613394 a3e7f4c263c9 60aada0d97ab 92093e266d4d]
	I0314 11:19:37.723253   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:19:37.734820   13262 logs.go:276] 1 containers: [96e093b518ce]
	I0314 11:19:37.734886   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:19:37.745817   13262 logs.go:276] 1 containers: [a1479f7acffc]
	I0314 11:19:37.745884   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:19:37.761936   13262 logs.go:276] 1 containers: [f5aed84f8939]
	I0314 11:19:37.762008   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:19:37.773007   13262 logs.go:276] 0 containers: []
	W0314 11:19:37.773018   13262 logs.go:278] No container was found matching "kindnet"
	I0314 11:19:37.773076   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:19:37.784205   13262 logs.go:276] 1 containers: [30a94033f9ad]
	I0314 11:19:37.784222   13262 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:19:37.784228   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:19:37.818689   13262 logs.go:123] Gathering logs for coredns [92093e266d4d] ...
	I0314 11:19:37.818702   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92093e266d4d"
	I0314 11:19:37.833274   13262 logs.go:123] Gathering logs for kube-apiserver [d1542ac9663a] ...
	I0314 11:19:37.833285   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1542ac9663a"
	I0314 11:19:37.856205   13262 logs.go:123] Gathering logs for coredns [a3e7f4c263c9] ...
	I0314 11:19:37.856214   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3e7f4c263c9"
	I0314 11:19:37.867879   13262 logs.go:123] Gathering logs for coredns [60aada0d97ab] ...
	I0314 11:19:37.867892   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60aada0d97ab"
	I0314 11:19:37.880181   13262 logs.go:123] Gathering logs for etcd [bd019f4d619a] ...
	I0314 11:19:37.880194   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd019f4d619a"
	I0314 11:19:37.911636   13262 logs.go:123] Gathering logs for container status ...
	I0314 11:19:37.911648   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:19:37.923291   13262 logs.go:123] Gathering logs for kube-scheduler [96e093b518ce] ...
	I0314 11:19:37.923304   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96e093b518ce"
	I0314 11:19:37.939296   13262 logs.go:123] Gathering logs for kube-proxy [a1479f7acffc] ...
	I0314 11:19:37.939309   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1479f7acffc"
	I0314 11:19:37.951867   13262 logs.go:123] Gathering logs for kube-controller-manager [f5aed84f8939] ...
	I0314 11:19:37.951877   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5aed84f8939"
	I0314 11:19:37.969299   13262 logs.go:123] Gathering logs for storage-provisioner [30a94033f9ad] ...
	I0314 11:19:37.969308   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30a94033f9ad"
	I0314 11:19:37.981565   13262 logs.go:123] Gathering logs for Docker ...
	I0314 11:19:37.981574   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:19:38.005935   13262 logs.go:123] Gathering logs for kubelet ...
	I0314 11:19:38.005943   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:19:38.036956   13262 logs.go:123] Gathering logs for dmesg ...
	I0314 11:19:38.036965   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:19:38.040926   13262 logs.go:123] Gathering logs for coredns [e5c26b613394] ...
	I0314 11:19:38.040933   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5c26b613394"
	I0314 11:19:40.555033   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:19:45.557349   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:19:45.557733   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:19:45.594648   13262 logs.go:276] 1 containers: [d1542ac9663a]
	I0314 11:19:45.594773   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:19:45.618479   13262 logs.go:276] 1 containers: [bd019f4d619a]
	I0314 11:19:45.618592   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:19:45.635945   13262 logs.go:276] 4 containers: [e5c26b613394 a3e7f4c263c9 60aada0d97ab 92093e266d4d]
	I0314 11:19:45.636027   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:19:45.650207   13262 logs.go:276] 1 containers: [96e093b518ce]
	I0314 11:19:45.650282   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:19:45.665383   13262 logs.go:276] 1 containers: [a1479f7acffc]
	I0314 11:19:45.665444   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:19:45.676786   13262 logs.go:276] 1 containers: [f5aed84f8939]
	I0314 11:19:45.676847   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:19:45.691105   13262 logs.go:276] 0 containers: []
	W0314 11:19:45.691119   13262 logs.go:278] No container was found matching "kindnet"
	I0314 11:19:45.691178   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:19:45.702296   13262 logs.go:276] 1 containers: [30a94033f9ad]
	I0314 11:19:45.702314   13262 logs.go:123] Gathering logs for kube-proxy [a1479f7acffc] ...
	I0314 11:19:45.702318   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1479f7acffc"
	I0314 11:19:45.714222   13262 logs.go:123] Gathering logs for container status ...
	I0314 11:19:45.714236   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:19:45.731713   13262 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:19:45.731727   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:19:45.772408   13262 logs.go:123] Gathering logs for coredns [60aada0d97ab] ...
	I0314 11:19:45.772419   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60aada0d97ab"
	I0314 11:19:45.784609   13262 logs.go:123] Gathering logs for coredns [92093e266d4d] ...
	I0314 11:19:45.784621   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92093e266d4d"
	I0314 11:19:45.796750   13262 logs.go:123] Gathering logs for kube-scheduler [96e093b518ce] ...
	I0314 11:19:45.796764   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96e093b518ce"
	I0314 11:19:45.812972   13262 logs.go:123] Gathering logs for storage-provisioner [30a94033f9ad] ...
	I0314 11:19:45.812983   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30a94033f9ad"
	I0314 11:19:45.824544   13262 logs.go:123] Gathering logs for kubelet ...
	I0314 11:19:45.824556   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:19:45.855316   13262 logs.go:123] Gathering logs for Docker ...
	I0314 11:19:45.855325   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:19:45.880110   13262 logs.go:123] Gathering logs for kube-apiserver [d1542ac9663a] ...
	I0314 11:19:45.880117   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1542ac9663a"
	I0314 11:19:45.897900   13262 logs.go:123] Gathering logs for etcd [bd019f4d619a] ...
	I0314 11:19:45.897912   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd019f4d619a"
	I0314 11:19:45.912341   13262 logs.go:123] Gathering logs for coredns [e5c26b613394] ...
	I0314 11:19:45.912352   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5c26b613394"
	I0314 11:19:45.925130   13262 logs.go:123] Gathering logs for coredns [a3e7f4c263c9] ...
	I0314 11:19:45.925140   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3e7f4c263c9"
	I0314 11:19:45.937061   13262 logs.go:123] Gathering logs for kube-controller-manager [f5aed84f8939] ...
	I0314 11:19:45.937070   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5aed84f8939"
	I0314 11:19:45.956330   13262 logs.go:123] Gathering logs for dmesg ...
	I0314 11:19:45.956339   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:19:48.461016   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:19:53.463197   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:19:53.463339   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:19:53.478085   13262 logs.go:276] 1 containers: [d1542ac9663a]
	I0314 11:19:53.478171   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:19:53.490048   13262 logs.go:276] 1 containers: [bd019f4d619a]
	I0314 11:19:53.490122   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:19:53.500866   13262 logs.go:276] 4 containers: [e5c26b613394 a3e7f4c263c9 60aada0d97ab 92093e266d4d]
	I0314 11:19:53.500934   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:19:53.511422   13262 logs.go:276] 1 containers: [96e093b518ce]
	I0314 11:19:53.511483   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:19:53.522027   13262 logs.go:276] 1 containers: [a1479f7acffc]
	I0314 11:19:53.522093   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:19:53.532099   13262 logs.go:276] 1 containers: [f5aed84f8939]
	I0314 11:19:53.532180   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:19:53.542792   13262 logs.go:276] 0 containers: []
	W0314 11:19:53.542804   13262 logs.go:278] No container was found matching "kindnet"
	I0314 11:19:53.542878   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:19:53.553506   13262 logs.go:276] 1 containers: [30a94033f9ad]
	I0314 11:19:53.553525   13262 logs.go:123] Gathering logs for dmesg ...
	I0314 11:19:53.553531   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:19:53.558071   13262 logs.go:123] Gathering logs for kube-scheduler [96e093b518ce] ...
	I0314 11:19:53.558078   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96e093b518ce"
	I0314 11:19:53.574189   13262 logs.go:123] Gathering logs for coredns [e5c26b613394] ...
	I0314 11:19:53.574199   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5c26b613394"
	I0314 11:19:53.586041   13262 logs.go:123] Gathering logs for coredns [a3e7f4c263c9] ...
	I0314 11:19:53.586052   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3e7f4c263c9"
	I0314 11:19:53.597629   13262 logs.go:123] Gathering logs for coredns [92093e266d4d] ...
	I0314 11:19:53.597640   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92093e266d4d"
	I0314 11:19:53.608716   13262 logs.go:123] Gathering logs for container status ...
	I0314 11:19:53.608727   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:19:53.622119   13262 logs.go:123] Gathering logs for kube-controller-manager [f5aed84f8939] ...
	I0314 11:19:53.622128   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5aed84f8939"
	I0314 11:19:53.641393   13262 logs.go:123] Gathering logs for kubelet ...
	I0314 11:19:53.641401   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:19:53.672169   13262 logs.go:123] Gathering logs for kube-apiserver [d1542ac9663a] ...
	I0314 11:19:53.672179   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1542ac9663a"
	I0314 11:19:53.686634   13262 logs.go:123] Gathering logs for etcd [bd019f4d619a] ...
	I0314 11:19:53.686645   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd019f4d619a"
	I0314 11:19:53.705569   13262 logs.go:123] Gathering logs for coredns [60aada0d97ab] ...
	I0314 11:19:53.705580   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60aada0d97ab"
	I0314 11:19:53.717489   13262 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:19:53.717500   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:19:53.750759   13262 logs.go:123] Gathering logs for kube-proxy [a1479f7acffc] ...
	I0314 11:19:53.750770   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1479f7acffc"
	I0314 11:19:53.762401   13262 logs.go:123] Gathering logs for storage-provisioner [30a94033f9ad] ...
	I0314 11:19:53.762411   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30a94033f9ad"
	I0314 11:19:53.785300   13262 logs.go:123] Gathering logs for Docker ...
	I0314 11:19:53.785309   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:19:56.312762   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:20:01.315546   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:20:01.315968   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:20:01.350925   13262 logs.go:276] 1 containers: [d1542ac9663a]
	I0314 11:20:01.351037   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:20:01.372474   13262 logs.go:276] 1 containers: [bd019f4d619a]
	I0314 11:20:01.372566   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:20:01.392926   13262 logs.go:276] 4 containers: [e5c26b613394 a3e7f4c263c9 60aada0d97ab 92093e266d4d]
	I0314 11:20:01.392998   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:20:01.405376   13262 logs.go:276] 1 containers: [96e093b518ce]
	I0314 11:20:01.405445   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:20:01.420746   13262 logs.go:276] 1 containers: [a1479f7acffc]
	I0314 11:20:01.420800   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:20:01.434925   13262 logs.go:276] 1 containers: [f5aed84f8939]
	I0314 11:20:01.434999   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:20:01.445074   13262 logs.go:276] 0 containers: []
	W0314 11:20:01.445085   13262 logs.go:278] No container was found matching "kindnet"
	I0314 11:20:01.445139   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:20:01.455716   13262 logs.go:276] 1 containers: [30a94033f9ad]
	I0314 11:20:01.455733   13262 logs.go:123] Gathering logs for kube-controller-manager [f5aed84f8939] ...
	I0314 11:20:01.455737   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5aed84f8939"
	I0314 11:20:01.473775   13262 logs.go:123] Gathering logs for Docker ...
	I0314 11:20:01.473788   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:20:01.497803   13262 logs.go:123] Gathering logs for kubelet ...
	I0314 11:20:01.497810   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:20:01.526715   13262 logs.go:123] Gathering logs for dmesg ...
	I0314 11:20:01.526722   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:20:01.530648   13262 logs.go:123] Gathering logs for kube-apiserver [d1542ac9663a] ...
	I0314 11:20:01.530656   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1542ac9663a"
	I0314 11:20:01.545114   13262 logs.go:123] Gathering logs for coredns [e5c26b613394] ...
	I0314 11:20:01.545127   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5c26b613394"
	I0314 11:20:01.556972   13262 logs.go:123] Gathering logs for coredns [60aada0d97ab] ...
	I0314 11:20:01.556982   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60aada0d97ab"
	I0314 11:20:01.568785   13262 logs.go:123] Gathering logs for storage-provisioner [30a94033f9ad] ...
	I0314 11:20:01.568799   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30a94033f9ad"
	I0314 11:20:01.579576   13262 logs.go:123] Gathering logs for coredns [a3e7f4c263c9] ...
	I0314 11:20:01.579587   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3e7f4c263c9"
	I0314 11:20:01.591350   13262 logs.go:123] Gathering logs for coredns [92093e266d4d] ...
	I0314 11:20:01.591362   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92093e266d4d"
	I0314 11:20:01.602876   13262 logs.go:123] Gathering logs for kube-proxy [a1479f7acffc] ...
	I0314 11:20:01.602890   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1479f7acffc"
	I0314 11:20:01.614260   13262 logs.go:123] Gathering logs for container status ...
	I0314 11:20:01.614272   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:20:01.625907   13262 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:20:01.625919   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:20:01.659874   13262 logs.go:123] Gathering logs for etcd [bd019f4d619a] ...
	I0314 11:20:01.659887   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd019f4d619a"
	I0314 11:20:01.674266   13262 logs.go:123] Gathering logs for kube-scheduler [96e093b518ce] ...
	I0314 11:20:01.674278   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96e093b518ce"
	I0314 11:20:04.190539   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:20:09.192745   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:20:09.193008   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:20:09.215958   13262 logs.go:276] 1 containers: [d1542ac9663a]
	I0314 11:20:09.216073   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:20:09.232162   13262 logs.go:276] 1 containers: [bd019f4d619a]
	I0314 11:20:09.232242   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:20:09.245201   13262 logs.go:276] 4 containers: [e5c26b613394 a3e7f4c263c9 60aada0d97ab 92093e266d4d]
	I0314 11:20:09.245277   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:20:09.255961   13262 logs.go:276] 1 containers: [96e093b518ce]
	I0314 11:20:09.256032   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:20:09.266168   13262 logs.go:276] 1 containers: [a1479f7acffc]
	I0314 11:20:09.266235   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:20:09.276589   13262 logs.go:276] 1 containers: [f5aed84f8939]
	I0314 11:20:09.276652   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:20:09.286630   13262 logs.go:276] 0 containers: []
	W0314 11:20:09.286641   13262 logs.go:278] No container was found matching "kindnet"
	I0314 11:20:09.286693   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:20:09.296917   13262 logs.go:276] 1 containers: [30a94033f9ad]
	I0314 11:20:09.296935   13262 logs.go:123] Gathering logs for dmesg ...
	I0314 11:20:09.296940   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:20:09.301386   13262 logs.go:123] Gathering logs for coredns [a3e7f4c263c9] ...
	I0314 11:20:09.301393   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3e7f4c263c9"
	I0314 11:20:09.314739   13262 logs.go:123] Gathering logs for coredns [92093e266d4d] ...
	I0314 11:20:09.314749   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92093e266d4d"
	I0314 11:20:09.333892   13262 logs.go:123] Gathering logs for storage-provisioner [30a94033f9ad] ...
	I0314 11:20:09.333902   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30a94033f9ad"
	I0314 11:20:09.345388   13262 logs.go:123] Gathering logs for container status ...
	I0314 11:20:09.345400   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:20:09.357385   13262 logs.go:123] Gathering logs for kubelet ...
	I0314 11:20:09.357397   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:20:09.387383   13262 logs.go:123] Gathering logs for kube-apiserver [d1542ac9663a] ...
	I0314 11:20:09.387391   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1542ac9663a"
	I0314 11:20:09.401236   13262 logs.go:123] Gathering logs for kube-proxy [a1479f7acffc] ...
	I0314 11:20:09.401248   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1479f7acffc"
	I0314 11:20:09.412904   13262 logs.go:123] Gathering logs for kube-controller-manager [f5aed84f8939] ...
	I0314 11:20:09.412916   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5aed84f8939"
	I0314 11:20:09.430112   13262 logs.go:123] Gathering logs for coredns [60aada0d97ab] ...
	I0314 11:20:09.430123   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60aada0d97ab"
	I0314 11:20:09.441704   13262 logs.go:123] Gathering logs for Docker ...
	I0314 11:20:09.441717   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:20:09.465443   13262 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:20:09.465450   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:20:09.498390   13262 logs.go:123] Gathering logs for etcd [bd019f4d619a] ...
	I0314 11:20:09.498400   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd019f4d619a"
	I0314 11:20:09.512612   13262 logs.go:123] Gathering logs for coredns [e5c26b613394] ...
	I0314 11:20:09.512623   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5c26b613394"
	I0314 11:20:09.524780   13262 logs.go:123] Gathering logs for kube-scheduler [96e093b518ce] ...
	I0314 11:20:09.524794   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96e093b518ce"
	I0314 11:20:12.040949   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:20:17.043243   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:20:17.043370   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:20:17.054033   13262 logs.go:276] 1 containers: [d1542ac9663a]
	I0314 11:20:17.054098   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:20:17.065885   13262 logs.go:276] 1 containers: [bd019f4d619a]
	I0314 11:20:17.065953   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:20:17.078044   13262 logs.go:276] 4 containers: [e5c26b613394 a3e7f4c263c9 60aada0d97ab 92093e266d4d]
	I0314 11:20:17.078123   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:20:17.090351   13262 logs.go:276] 1 containers: [96e093b518ce]
	I0314 11:20:17.090396   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:20:17.100538   13262 logs.go:276] 1 containers: [a1479f7acffc]
	I0314 11:20:17.100606   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:20:17.112375   13262 logs.go:276] 1 containers: [f5aed84f8939]
	I0314 11:20:17.112426   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:20:17.124476   13262 logs.go:276] 0 containers: []
	W0314 11:20:17.124492   13262 logs.go:278] No container was found matching "kindnet"
	I0314 11:20:17.124563   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:20:17.140585   13262 logs.go:276] 1 containers: [30a94033f9ad]
	I0314 11:20:17.140602   13262 logs.go:123] Gathering logs for etcd [bd019f4d619a] ...
	I0314 11:20:17.140608   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd019f4d619a"
	I0314 11:20:17.155577   13262 logs.go:123] Gathering logs for coredns [92093e266d4d] ...
	I0314 11:20:17.155589   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92093e266d4d"
	I0314 11:20:17.167687   13262 logs.go:123] Gathering logs for kube-proxy [a1479f7acffc] ...
	I0314 11:20:17.167698   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1479f7acffc"
	I0314 11:20:17.181478   13262 logs.go:123] Gathering logs for Docker ...
	I0314 11:20:17.181488   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:20:17.205672   13262 logs.go:123] Gathering logs for container status ...
	I0314 11:20:17.205688   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:20:17.218397   13262 logs.go:123] Gathering logs for dmesg ...
	I0314 11:20:17.218413   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:20:17.224128   13262 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:20:17.224141   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:20:17.260963   13262 logs.go:123] Gathering logs for kube-controller-manager [f5aed84f8939] ...
	I0314 11:20:17.260975   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5aed84f8939"
	I0314 11:20:17.281109   13262 logs.go:123] Gathering logs for storage-provisioner [30a94033f9ad] ...
	I0314 11:20:17.281119   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30a94033f9ad"
	I0314 11:20:17.293503   13262 logs.go:123] Gathering logs for kube-apiserver [d1542ac9663a] ...
	I0314 11:20:17.293514   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1542ac9663a"
	I0314 11:20:17.309362   13262 logs.go:123] Gathering logs for coredns [60aada0d97ab] ...
	I0314 11:20:17.309376   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60aada0d97ab"
	I0314 11:20:17.326999   13262 logs.go:123] Gathering logs for kube-scheduler [96e093b518ce] ...
	I0314 11:20:17.327012   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96e093b518ce"
	I0314 11:20:17.350536   13262 logs.go:123] Gathering logs for coredns [e5c26b613394] ...
	I0314 11:20:17.350554   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5c26b613394"
	I0314 11:20:17.363762   13262 logs.go:123] Gathering logs for coredns [a3e7f4c263c9] ...
	I0314 11:20:17.363774   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3e7f4c263c9"
	I0314 11:20:17.376677   13262 logs.go:123] Gathering logs for kubelet ...
	I0314 11:20:17.376689   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:20:19.909953   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:20:24.912102   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:20:24.912266   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:20:24.927049   13262 logs.go:276] 1 containers: [d1542ac9663a]
	I0314 11:20:24.927127   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:20:24.939658   13262 logs.go:276] 1 containers: [bd019f4d619a]
	I0314 11:20:24.939726   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:20:24.951154   13262 logs.go:276] 4 containers: [e5c26b613394 a3e7f4c263c9 60aada0d97ab 92093e266d4d]
	I0314 11:20:24.951235   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:20:24.963558   13262 logs.go:276] 1 containers: [96e093b518ce]
	I0314 11:20:24.963628   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:20:24.981967   13262 logs.go:276] 1 containers: [a1479f7acffc]
	I0314 11:20:24.982040   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:20:24.993658   13262 logs.go:276] 1 containers: [f5aed84f8939]
	I0314 11:20:24.993729   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:20:25.005297   13262 logs.go:276] 0 containers: []
	W0314 11:20:25.005310   13262 logs.go:278] No container was found matching "kindnet"
	I0314 11:20:25.005374   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:20:25.021180   13262 logs.go:276] 1 containers: [30a94033f9ad]
	I0314 11:20:25.021201   13262 logs.go:123] Gathering logs for coredns [a3e7f4c263c9] ...
	I0314 11:20:25.021207   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3e7f4c263c9"
	I0314 11:20:25.034245   13262 logs.go:123] Gathering logs for coredns [60aada0d97ab] ...
	I0314 11:20:25.034258   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60aada0d97ab"
	I0314 11:20:25.048969   13262 logs.go:123] Gathering logs for container status ...
	I0314 11:20:25.048979   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:20:25.068408   13262 logs.go:123] Gathering logs for kubelet ...
	I0314 11:20:25.068420   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:20:25.098877   13262 logs.go:123] Gathering logs for dmesg ...
	I0314 11:20:25.098886   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:20:25.103452   13262 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:20:25.103463   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:20:25.139984   13262 logs.go:123] Gathering logs for coredns [e5c26b613394] ...
	I0314 11:20:25.139996   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5c26b613394"
	I0314 11:20:25.151882   13262 logs.go:123] Gathering logs for kube-scheduler [96e093b518ce] ...
	I0314 11:20:25.151893   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96e093b518ce"
	I0314 11:20:25.166465   13262 logs.go:123] Gathering logs for kube-proxy [a1479f7acffc] ...
	I0314 11:20:25.166475   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1479f7acffc"
	I0314 11:20:25.177725   13262 logs.go:123] Gathering logs for kube-controller-manager [f5aed84f8939] ...
	I0314 11:20:25.177739   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5aed84f8939"
	I0314 11:20:25.194937   13262 logs.go:123] Gathering logs for storage-provisioner [30a94033f9ad] ...
	I0314 11:20:25.194947   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30a94033f9ad"
	I0314 11:20:25.206251   13262 logs.go:123] Gathering logs for Docker ...
	I0314 11:20:25.206260   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:20:25.229837   13262 logs.go:123] Gathering logs for kube-apiserver [d1542ac9663a] ...
	I0314 11:20:25.229844   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1542ac9663a"
	I0314 11:20:25.244169   13262 logs.go:123] Gathering logs for etcd [bd019f4d619a] ...
	I0314 11:20:25.244178   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd019f4d619a"
	I0314 11:20:25.257384   13262 logs.go:123] Gathering logs for coredns [92093e266d4d] ...
	I0314 11:20:25.257397   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92093e266d4d"
	I0314 11:20:27.770310   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:20:32.772661   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:20:32.773040   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:20:32.806022   13262 logs.go:276] 1 containers: [d1542ac9663a]
	I0314 11:20:32.806158   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:20:32.825094   13262 logs.go:276] 1 containers: [bd019f4d619a]
	I0314 11:20:32.825180   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:20:32.839320   13262 logs.go:276] 4 containers: [e5c26b613394 a3e7f4c263c9 60aada0d97ab 92093e266d4d]
	I0314 11:20:32.839396   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:20:32.851318   13262 logs.go:276] 1 containers: [96e093b518ce]
	I0314 11:20:32.851380   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:20:32.861624   13262 logs.go:276] 1 containers: [a1479f7acffc]
	I0314 11:20:32.861688   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:20:32.872157   13262 logs.go:276] 1 containers: [f5aed84f8939]
	I0314 11:20:32.872228   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:20:32.882001   13262 logs.go:276] 0 containers: []
	W0314 11:20:32.882012   13262 logs.go:278] No container was found matching "kindnet"
	I0314 11:20:32.882068   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:20:32.892025   13262 logs.go:276] 1 containers: [30a94033f9ad]
	I0314 11:20:32.892043   13262 logs.go:123] Gathering logs for container status ...
	I0314 11:20:32.892049   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:20:32.903256   13262 logs.go:123] Gathering logs for kube-apiserver [d1542ac9663a] ...
	I0314 11:20:32.903270   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1542ac9663a"
	I0314 11:20:32.917325   13262 logs.go:123] Gathering logs for etcd [bd019f4d619a] ...
	I0314 11:20:32.917336   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd019f4d619a"
	I0314 11:20:32.931241   13262 logs.go:123] Gathering logs for coredns [e5c26b613394] ...
	I0314 11:20:32.931255   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5c26b613394"
	I0314 11:20:32.943143   13262 logs.go:123] Gathering logs for coredns [60aada0d97ab] ...
	I0314 11:20:32.943156   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60aada0d97ab"
	I0314 11:20:32.959197   13262 logs.go:123] Gathering logs for kube-controller-manager [f5aed84f8939] ...
	I0314 11:20:32.959207   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5aed84f8939"
	I0314 11:20:32.976479   13262 logs.go:123] Gathering logs for kubelet ...
	I0314 11:20:32.976489   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:20:33.006979   13262 logs.go:123] Gathering logs for kube-scheduler [96e093b518ce] ...
	I0314 11:20:33.006987   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96e093b518ce"
	I0314 11:20:33.021583   13262 logs.go:123] Gathering logs for kube-proxy [a1479f7acffc] ...
	I0314 11:20:33.021593   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1479f7acffc"
	I0314 11:20:33.033070   13262 logs.go:123] Gathering logs for storage-provisioner [30a94033f9ad] ...
	I0314 11:20:33.033080   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30a94033f9ad"
	I0314 11:20:33.043986   13262 logs.go:123] Gathering logs for dmesg ...
	I0314 11:20:33.043999   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:20:33.048497   13262 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:20:33.048505   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:20:33.082516   13262 logs.go:123] Gathering logs for coredns [a3e7f4c263c9] ...
	I0314 11:20:33.082528   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3e7f4c263c9"
	I0314 11:20:33.094420   13262 logs.go:123] Gathering logs for coredns [92093e266d4d] ...
	I0314 11:20:33.094430   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92093e266d4d"
	I0314 11:20:33.110855   13262 logs.go:123] Gathering logs for Docker ...
	I0314 11:20:33.110868   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:20:35.636489   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:20:40.638489   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:20:40.638556   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:20:40.651312   13262 logs.go:276] 1 containers: [d1542ac9663a]
	I0314 11:20:40.651390   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:20:40.662760   13262 logs.go:276] 1 containers: [bd019f4d619a]
	I0314 11:20:40.662812   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:20:40.673633   13262 logs.go:276] 4 containers: [e5c26b613394 a3e7f4c263c9 60aada0d97ab 92093e266d4d]
	I0314 11:20:40.673695   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:20:40.684936   13262 logs.go:276] 1 containers: [96e093b518ce]
	I0314 11:20:40.685014   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:20:40.696456   13262 logs.go:276] 1 containers: [a1479f7acffc]
	I0314 11:20:40.696526   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:20:40.707711   13262 logs.go:276] 1 containers: [f5aed84f8939]
	I0314 11:20:40.707804   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:20:40.718989   13262 logs.go:276] 0 containers: []
	W0314 11:20:40.719002   13262 logs.go:278] No container was found matching "kindnet"
	I0314 11:20:40.719079   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:20:40.731470   13262 logs.go:276] 1 containers: [30a94033f9ad]
	I0314 11:20:40.731490   13262 logs.go:123] Gathering logs for storage-provisioner [30a94033f9ad] ...
	I0314 11:20:40.731495   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30a94033f9ad"
	I0314 11:20:40.743672   13262 logs.go:123] Gathering logs for dmesg ...
	I0314 11:20:40.743684   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:20:40.748868   13262 logs.go:123] Gathering logs for kube-apiserver [d1542ac9663a] ...
	I0314 11:20:40.748878   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1542ac9663a"
	I0314 11:20:40.772268   13262 logs.go:123] Gathering logs for etcd [bd019f4d619a] ...
	I0314 11:20:40.772276   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd019f4d619a"
	I0314 11:20:40.788843   13262 logs.go:123] Gathering logs for kubelet ...
	I0314 11:20:40.788854   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:20:40.819725   13262 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:20:40.819738   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:20:40.858925   13262 logs.go:123] Gathering logs for kube-scheduler [96e093b518ce] ...
	I0314 11:20:40.858935   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96e093b518ce"
	I0314 11:20:40.879669   13262 logs.go:123] Gathering logs for Docker ...
	I0314 11:20:40.879679   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:20:40.905241   13262 logs.go:123] Gathering logs for container status ...
	I0314 11:20:40.905251   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:20:40.917566   13262 logs.go:123] Gathering logs for coredns [60aada0d97ab] ...
	I0314 11:20:40.917579   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60aada0d97ab"
	I0314 11:20:40.931462   13262 logs.go:123] Gathering logs for kube-proxy [a1479f7acffc] ...
	I0314 11:20:40.931477   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1479f7acffc"
	I0314 11:20:40.944178   13262 logs.go:123] Gathering logs for kube-controller-manager [f5aed84f8939] ...
	I0314 11:20:40.944189   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5aed84f8939"
	I0314 11:20:40.961763   13262 logs.go:123] Gathering logs for coredns [e5c26b613394] ...
	I0314 11:20:40.961774   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5c26b613394"
	I0314 11:20:40.979336   13262 logs.go:123] Gathering logs for coredns [a3e7f4c263c9] ...
	I0314 11:20:40.979348   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3e7f4c263c9"
	I0314 11:20:40.991557   13262 logs.go:123] Gathering logs for coredns [92093e266d4d] ...
	I0314 11:20:40.991567   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92093e266d4d"
	I0314 11:20:43.505925   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:20:48.508769   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:20:48.509124   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0314 11:20:48.547886   13262 logs.go:276] 1 containers: [d1542ac9663a]
	I0314 11:20:48.548012   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0314 11:20:48.569946   13262 logs.go:276] 1 containers: [bd019f4d619a]
	I0314 11:20:48.570057   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0314 11:20:48.586091   13262 logs.go:276] 4 containers: [e5c26b613394 a3e7f4c263c9 60aada0d97ab 92093e266d4d]
	I0314 11:20:48.586168   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0314 11:20:48.598776   13262 logs.go:276] 1 containers: [96e093b518ce]
	I0314 11:20:48.598845   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0314 11:20:48.609253   13262 logs.go:276] 1 containers: [a1479f7acffc]
	I0314 11:20:48.609315   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0314 11:20:48.619575   13262 logs.go:276] 1 containers: [f5aed84f8939]
	I0314 11:20:48.619636   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0314 11:20:48.630174   13262 logs.go:276] 0 containers: []
	W0314 11:20:48.630189   13262 logs.go:278] No container was found matching "kindnet"
	I0314 11:20:48.630248   13262 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0314 11:20:48.640567   13262 logs.go:276] 1 containers: [30a94033f9ad]
	I0314 11:20:48.640582   13262 logs.go:123] Gathering logs for kubelet ...
	I0314 11:20:48.640587   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 11:20:48.671722   13262 logs.go:123] Gathering logs for storage-provisioner [30a94033f9ad] ...
	I0314 11:20:48.671731   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 30a94033f9ad"
	I0314 11:20:48.685164   13262 logs.go:123] Gathering logs for container status ...
	I0314 11:20:48.685173   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 11:20:48.696984   13262 logs.go:123] Gathering logs for describe nodes ...
	I0314 11:20:48.696995   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 11:20:48.730964   13262 logs.go:123] Gathering logs for kube-apiserver [d1542ac9663a] ...
	I0314 11:20:48.730976   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d1542ac9663a"
	I0314 11:20:48.746552   13262 logs.go:123] Gathering logs for etcd [bd019f4d619a] ...
	I0314 11:20:48.746564   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd019f4d619a"
	I0314 11:20:48.760498   13262 logs.go:123] Gathering logs for coredns [60aada0d97ab] ...
	I0314 11:20:48.760509   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60aada0d97ab"
	I0314 11:20:48.774053   13262 logs.go:123] Gathering logs for kube-controller-manager [f5aed84f8939] ...
	I0314 11:20:48.774064   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f5aed84f8939"
	I0314 11:20:48.791715   13262 logs.go:123] Gathering logs for Docker ...
	I0314 11:20:48.791725   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0314 11:20:48.815188   13262 logs.go:123] Gathering logs for dmesg ...
	I0314 11:20:48.815196   13262 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 11:20:48.819475   13262 logs.go:123] Gathering logs for coredns [e5c26b613394] ...
	I0314 11:20:48.819481   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5c26b613394"
	I0314 11:20:48.831384   13262 logs.go:123] Gathering logs for coredns [92093e266d4d] ...
	I0314 11:20:48.831394   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92093e266d4d"
	I0314 11:20:48.842450   13262 logs.go:123] Gathering logs for kube-scheduler [96e093b518ce] ...
	I0314 11:20:48.842460   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 96e093b518ce"
	I0314 11:20:48.857116   13262 logs.go:123] Gathering logs for kube-proxy [a1479f7acffc] ...
	I0314 11:20:48.857126   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1479f7acffc"
	I0314 11:20:48.872218   13262 logs.go:123] Gathering logs for coredns [a3e7f4c263c9] ...
	I0314 11:20:48.872227   13262 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3e7f4c263c9"
	I0314 11:20:51.385273   13262 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0314 11:20:56.387583   13262 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 11:20:56.391323   13262 out.go:177] 
	W0314 11:20:56.395358   13262 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0314 11:20:56.395364   13262 out.go:239] * 
	W0314 11:20:56.395857   13262 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0314 11:20:56.411334   13262 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-157000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (579.83s)
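
The stderr log above captures the failure's shape: minikube alternates between probing https://10.0.2.15:8443/healthz, with each probe abandoned after roughly five seconds, and re-gathering component logs over SSH (journalctl, docker logs, kubectl describe nodes), until the 6m0s node wait expires and the run exits with GUEST_START. A minimal Go sketch of that polling pattern follows; the names, intervals, and TLS handling are illustrative assumptions, not minikube's actual implementation:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls the apiserver /healthz endpoint until it returns
	// 200 OK or the overall deadline expires, mirroring the probe/timeout
	// rhythm visible in the log above.
	func waitForHealthz(url string, overall time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second, // each probe gives up after ~5s, as in the log
			Transport: &http.Transport{
				// the apiserver serves a self-signed certificate during bring-up
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(overall)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz reported healthy
				}
			}
			time.Sleep(2 * time.Second) // brief pause before the next probe
		}
		return fmt.Errorf("apiserver healthz never reported healthy: %s", url)
	}

	func main() {
		if err := waitForHealthz("https://10.0.2.15:8443/healthz", 6*time.Minute); err != nil {
			fmt.Println("X", err)
		}
	}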

                                                
                                    
TestPause/serial/Start (9.97s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-512000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-512000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.92290225s)

                                                
                                                
-- stdout --
	* [pause-512000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18384
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-512000" primary control-plane node in "pause-512000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-512000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-512000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-512000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-512000 -n pause-512000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-512000 -n pause-512000: exit status 7 (50.774375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-512000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.97s)
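
Unlike the upgrade test above, this failure, and every remaining one in this section, never reaches Kubernetes at all: the qemu2 driver's socket_vmnet_client cannot connect to /var/run/socket_vmnet, meaning no socket_vmnet daemon was listening on the build host. A small Go probe makes the distinction between a refused connection and a missing socket explicit; the path comes from the log, and the probe itself is a diagnostic sketch, not part of the test suite:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Dial the socket_vmnet control socket the same way a client would.
		// "connection refused" here reproduces the ERROR lines in the log;
		// a missing-file error would point at the path instead.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}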

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (9.83s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-893000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-893000 --driver=qemu2 : exit status 80 (9.761099708s)

                                                
                                                
-- stdout --
	* [NoKubernetes-893000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18384
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-893000" primary control-plane node in "NoKubernetes-893000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-893000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-893000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-893000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-893000 -n NoKubernetes-893000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-893000 -n NoKubernetes-893000: exit status 7 (65.95925ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-893000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.83s)
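
The post-mortem helper queries host state with --format={{.Host}}; minikube renders such format strings as a Go text/template against a status value, which is why the output is the bare word Stopped. A sketch of that rendering step, where the Status struct is a stand-in rather than minikube's exact type:

	package main

	import (
		"os"
		"text/template"
	)

	// Status stands in for the value a --format template is executed against;
	// minikube's real status type has more fields.
	type Status struct {
		Host string // e.g. "Running" or "Stopped"
	}

	func main() {
		// Parse the user-supplied format string and render it against the status.
		tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
		_ = tmpl.Execute(os.Stdout, Status{Host: "Stopped"}) // prints: Stopped
	}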

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (5.89s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-893000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-893000 --no-kubernetes --driver=qemu2 : exit status 80 (5.837043875s)

                                                
                                                
-- stdout --
	* [NoKubernetes-893000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18384
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-893000
	* Restarting existing qemu2 VM for "NoKubernetes-893000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-893000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-893000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-893000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-893000 -n NoKubernetes-893000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-893000 -n NoKubernetes-893000: exit status 7 (53.58175ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-893000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.89s)
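
Each step in this group follows the same harness pattern: execute the minikube binary, capture combined output, and branch on the exit status (80 from the failed starts, 7 from the status probe). A compact Go sketch of that pattern, with an illustrative run helper rather than the suite's actual one:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	// run executes a binary and reports its exit code plus combined output,
	// mirroring the "(dbg) Run:" / "Non-zero exit" bookkeeping in the log.
	func run(bin string, args ...string) (int, string) {
		out, err := exec.Command(bin, args...).CombinedOutput()
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			return ee.ExitCode(), string(out) // e.g. 80 for the failed starts above
		}
		if err != nil {
			return -1, err.Error() // binary missing, not executable, etc.
		}
		return 0, string(out)
	}

	func main() {
		code, out := run("out/minikube-darwin-arm64", "status", "--format={{.Host}}", "-p", "NoKubernetes-893000")
		fmt.Printf("exit %d: %s", code, out)
	}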

                                                
                                    
TestNoKubernetes/serial/Start (5.88s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-893000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-893000 --no-kubernetes --driver=qemu2 : exit status 80 (5.837465584s)

                                                
                                                
-- stdout --
	* [NoKubernetes-893000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18384
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-893000
	* Restarting existing qemu2 VM for "NoKubernetes-893000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-893000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-893000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-893000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-893000 -n NoKubernetes-893000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-893000 -n NoKubernetes-893000: exit status 7 (40.417208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-893000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.88s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (5.95s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-893000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-893000 --driver=qemu2 : exit status 80 (5.87908775s)

                                                
                                                
-- stdout --
	* [NoKubernetes-893000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18384
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-893000
	* Restarting existing qemu2 VM for "NoKubernetes-893000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-893000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-893000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-893000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-893000 -n NoKubernetes-893000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-893000 -n NoKubernetes-893000: exit status 7 (68.560333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-893000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.95s)
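
The verbose trace in the test that follows makes the retry logic behind these one-line failures visible: a failed StartHost attempt is torn down and retried once after a five-second pause before the run exits with GUEST_PROVISION. In sketch form, with function names that are illustrative rather than minikube's API:

	package main

	import (
		"fmt"
		"time"
	)

	// startHost stands in for minikube's host-creation step; here it always
	// fails the way the logs do when socket_vmnet is unreachable.
	func startHost() error {
		return fmt.Errorf(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := startHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
			if err := startHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			}
		}
	}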

                                                
                                    
TestNetworkPlugins/group/auto/Start (10.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-912000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-912000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (10.182681292s)

                                                
                                                
-- stdout --
	* [auto-912000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18384
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-912000" primary control-plane node in "auto-912000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-912000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0314 11:19:40.421365   13473 out.go:291] Setting OutFile to fd 1 ...
	I0314 11:19:40.421504   13473 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:19:40.421508   13473 out.go:304] Setting ErrFile to fd 2...
	I0314 11:19:40.421510   13473 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:19:40.421628   13473 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
	I0314 11:19:40.422705   13473 out.go:298] Setting JSON to false
	I0314 11:19:40.439100   13473 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8352,"bootTime":1710432028,"procs":382,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0314 11:19:40.439170   13473 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0314 11:19:40.443967   13473 out.go:177] * [auto-912000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0314 11:19:40.450883   13473 out.go:177]   - MINIKUBE_LOCATION=18384
	I0314 11:19:40.450926   13473 notify.go:220] Checking for updates...
	I0314 11:19:40.456904   13473 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	I0314 11:19:40.459878   13473 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0314 11:19:40.462888   13473 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 11:19:40.465778   13473 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	I0314 11:19:40.468857   13473 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 11:19:40.472302   13473 config.go:182] Loaded profile config "multinode-382000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 11:19:40.472372   13473 config.go:182] Loaded profile config "stopped-upgrade-157000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0314 11:19:40.472417   13473 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 11:19:40.475800   13473 out.go:177] * Using the qemu2 driver based on user configuration
	I0314 11:19:40.482896   13473 start.go:297] selected driver: qemu2
	I0314 11:19:40.482902   13473 start.go:901] validating driver "qemu2" against <nil>
	I0314 11:19:40.482908   13473 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 11:19:40.485200   13473 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0314 11:19:40.486676   13473 out.go:177] * Automatically selected the socket_vmnet network
	I0314 11:19:40.489974   13473 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 11:19:40.490002   13473 cni.go:84] Creating CNI manager for ""
	I0314 11:19:40.490017   13473 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0314 11:19:40.490021   13473 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0314 11:19:40.490054   13473 start.go:340] cluster config:
	{Name:auto-912000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:auto-912000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 11:19:40.494582   13473 iso.go:125] acquiring lock: {Name:mkb282eb7d87bd36e7a71d78648e2dadb7f0a89e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 11:19:40.501907   13473 out.go:177] * Starting "auto-912000" primary control-plane node in "auto-912000" cluster
	I0314 11:19:40.505936   13473 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0314 11:19:40.505953   13473 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0314 11:19:40.505968   13473 cache.go:56] Caching tarball of preloaded images
	I0314 11:19:40.506026   13473 preload.go:173] Found /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0314 11:19:40.506032   13473 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0314 11:19:40.506106   13473 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/auto-912000/config.json ...
	I0314 11:19:40.506118   13473 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/auto-912000/config.json: {Name:mk02b1bf9451d216aa55bdb96db83cc05e597b2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 11:19:40.506328   13473 start.go:360] acquireMachinesLock for auto-912000: {Name:mk7f342f84a1fcf2acaf2ebdc885e8ee2d848fd8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 11:19:40.506359   13473 start.go:364] duration metric: took 25.375µs to acquireMachinesLock for "auto-912000"
	I0314 11:19:40.506371   13473 start.go:93] Provisioning new machine with config: &{Name:auto-912000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:auto-912000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0314 11:19:40.506400   13473 start.go:125] createHost starting for "" (driver="qemu2")
	I0314 11:19:40.513842   13473 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0314 11:19:40.529768   13473 start.go:159] libmachine.API.Create for "auto-912000" (driver="qemu2")
	I0314 11:19:40.529794   13473 client.go:168] LocalClient.Create starting
	I0314 11:19:40.529853   13473 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/ca.pem
	I0314 11:19:40.529881   13473 main.go:141] libmachine: Decoding PEM data...
	I0314 11:19:40.529891   13473 main.go:141] libmachine: Parsing certificate...
	I0314 11:19:40.529934   13473 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/cert.pem
	I0314 11:19:40.529956   13473 main.go:141] libmachine: Decoding PEM data...
	I0314 11:19:40.529963   13473 main.go:141] libmachine: Parsing certificate...
	I0314 11:19:40.530301   13473 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18384-10823/.minikube/cache/iso/arm64/minikube-v1.32.1-1710348681-18375-arm64.iso...
	I0314 11:19:40.671563   13473 main.go:141] libmachine: Creating SSH key...
	I0314 11:19:41.115666   13473 main.go:141] libmachine: Creating Disk image...
	I0314 11:19:41.115681   13473 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0314 11:19:41.115963   13473 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/auto-912000/disk.qcow2.raw /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/auto-912000/disk.qcow2
	I0314 11:19:41.129237   13473 main.go:141] libmachine: STDOUT: 
	I0314 11:19:41.129259   13473 main.go:141] libmachine: STDERR: 
	I0314 11:19:41.129328   13473 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/auto-912000/disk.qcow2 +20000M
	I0314 11:19:41.140616   13473 main.go:141] libmachine: STDOUT: Image resized.
	
	I0314 11:19:41.140630   13473 main.go:141] libmachine: STDERR: 
	I0314 11:19:41.140645   13473 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/auto-912000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/auto-912000/disk.qcow2
	I0314 11:19:41.140650   13473 main.go:141] libmachine: Starting QEMU VM...
	I0314 11:19:41.140690   13473 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/auto-912000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/auto-912000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/auto-912000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:ee:35:5d:c3:74 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/auto-912000/disk.qcow2
	I0314 11:19:41.142510   13473 main.go:141] libmachine: STDOUT: 
	I0314 11:19:41.142526   13473 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0314 11:19:41.142543   13473 client.go:171] duration metric: took 612.740292ms to LocalClient.Create
	I0314 11:19:43.144731   13473 start.go:128] duration metric: took 2.638290541s to createHost
	I0314 11:19:43.144825   13473 start.go:83] releasing machines lock for "auto-912000", held for 2.638449666s
	W0314 11:19:43.144872   13473 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0314 11:19:43.158407   13473 out.go:177] * Deleting "auto-912000" in qemu2 ...
	W0314 11:19:43.183915   13473 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0314 11:19:43.183951   13473 start.go:728] Will try again in 5 seconds ...
	I0314 11:19:48.186182   13473 start.go:360] acquireMachinesLock for auto-912000: {Name:mk7f342f84a1fcf2acaf2ebdc885e8ee2d848fd8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 11:19:48.186617   13473 start.go:364] duration metric: took 324.458µs to acquireMachinesLock for "auto-912000"
	I0314 11:19:48.186712   13473 start.go:93] Provisioning new machine with config: &{Name:auto-912000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:auto-912000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0314 11:19:48.186882   13473 start.go:125] createHost starting for "" (driver="qemu2")
	I0314 11:19:48.194371   13473 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0314 11:19:48.241142   13473 start.go:159] libmachine.API.Create for "auto-912000" (driver="qemu2")
	I0314 11:19:48.241205   13473 client.go:168] LocalClient.Create starting
	I0314 11:19:48.241351   13473 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/ca.pem
	I0314 11:19:48.241426   13473 main.go:141] libmachine: Decoding PEM data...
	I0314 11:19:48.241445   13473 main.go:141] libmachine: Parsing certificate...
	I0314 11:19:48.241514   13473 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/cert.pem
	I0314 11:19:48.241558   13473 main.go:141] libmachine: Decoding PEM data...
	I0314 11:19:48.241576   13473 main.go:141] libmachine: Parsing certificate...
	I0314 11:19:48.242292   13473 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18384-10823/.minikube/cache/iso/arm64/minikube-v1.32.1-1710348681-18375-arm64.iso...
	I0314 11:19:48.394281   13473 main.go:141] libmachine: Creating SSH key...
	I0314 11:19:48.507287   13473 main.go:141] libmachine: Creating Disk image...
	I0314 11:19:48.507293   13473 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0314 11:19:48.507520   13473 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/auto-912000/disk.qcow2.raw /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/auto-912000/disk.qcow2
	I0314 11:19:48.519538   13473 main.go:141] libmachine: STDOUT: 
	I0314 11:19:48.519556   13473 main.go:141] libmachine: STDERR: 
	I0314 11:19:48.519616   13473 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/auto-912000/disk.qcow2 +20000M
	I0314 11:19:48.530421   13473 main.go:141] libmachine: STDOUT: Image resized.
	
	I0314 11:19:48.530438   13473 main.go:141] libmachine: STDERR: 
	I0314 11:19:48.530451   13473 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/auto-912000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/auto-912000/disk.qcow2
	I0314 11:19:48.530456   13473 main.go:141] libmachine: Starting QEMU VM...
	I0314 11:19:48.530499   13473 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/auto-912000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/auto-912000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/auto-912000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:fd:0b:6f:7a:10 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/auto-912000/disk.qcow2
	I0314 11:19:48.532310   13473 main.go:141] libmachine: STDOUT: 
	I0314 11:19:48.532324   13473 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0314 11:19:48.532337   13473 client.go:171] duration metric: took 291.126417ms to LocalClient.Create
	I0314 11:19:50.533779   13473 start.go:128] duration metric: took 2.346828583s to createHost
	I0314 11:19:50.533863   13473 start.go:83] releasing machines lock for "auto-912000", held for 2.347221125s
	W0314 11:19:50.534149   13473 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-912000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-912000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0314 11:19:50.544938   13473 out.go:177] 
	W0314 11:19:50.548030   13473 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0314 11:19:50.548056   13473 out.go:239] * 
	* 
	W0314 11:19:50.549395   13473 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0314 11:19:50.559994   13473 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (10.18s)
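All three failures in this group share the root cause visible in the stderr above: socket_vmnet_client must connect to the socket_vmnet daemon's Unix socket at /var/run/socket_vmnet before it can hand a file descriptor to qemu-system-aarch64, and that connect is refused, meaning no daemon is listening on the agent. A minimal pre-flight probe, sketched in Go to match the harness (the probe is ours, not part of minikube), reproduces the same check:

	// probe_socket_vmnet.go: hypothetical diagnostic for the failure above.
	// It attempts the same Unix-socket connect that socket_vmnet_client makes;
	// "connection refused" means the socket_vmnet daemon is not running.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path taken from the failing command line
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "socket_vmnet unreachable at %s: %v\n", sock, err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Printf("socket_vmnet is listening at %s\n", sock)
	}

When the daemon itself is down, minikube's single retry after 5 seconds (visible in the log) cannot help; restarting the socket_vmnet service on the build agent is the likely remedy.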

TestNetworkPlugins/group/kindnet/Start (9.79s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-912000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-912000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.793528083s)

-- stdout --
	* [kindnet-912000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18384
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-912000" primary control-plane node in "kindnet-912000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-912000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0314 11:19:52.974440   13583 out.go:291] Setting OutFile to fd 1 ...
	I0314 11:19:52.974589   13583 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:19:52.974593   13583 out.go:304] Setting ErrFile to fd 2...
	I0314 11:19:52.974596   13583 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:19:52.974704   13583 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
	I0314 11:19:52.975736   13583 out.go:298] Setting JSON to false
	I0314 11:19:52.991709   13583 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8364,"bootTime":1710432028,"procs":380,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0314 11:19:52.991771   13583 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0314 11:19:52.997345   13583 out.go:177] * [kindnet-912000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0314 11:19:53.005279   13583 out.go:177]   - MINIKUBE_LOCATION=18384
	I0314 11:19:53.008384   13583 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	I0314 11:19:53.005335   13583 notify.go:220] Checking for updates...
	I0314 11:19:53.013307   13583 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0314 11:19:53.016330   13583 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 11:19:53.019323   13583 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	I0314 11:19:53.022301   13583 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 11:19:53.025628   13583 config.go:182] Loaded profile config "multinode-382000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 11:19:53.025693   13583 config.go:182] Loaded profile config "stopped-upgrade-157000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0314 11:19:53.025739   13583 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 11:19:53.030341   13583 out.go:177] * Using the qemu2 driver based on user configuration
	I0314 11:19:53.037289   13583 start.go:297] selected driver: qemu2
	I0314 11:19:53.037295   13583 start.go:901] validating driver "qemu2" against <nil>
	I0314 11:19:53.037304   13583 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 11:19:53.039647   13583 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0314 11:19:53.042323   13583 out.go:177] * Automatically selected the socket_vmnet network
	I0314 11:19:53.045413   13583 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 11:19:53.045452   13583 cni.go:84] Creating CNI manager for "kindnet"
	I0314 11:19:53.045456   13583 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0314 11:19:53.045482   13583 start.go:340] cluster config:
	{Name:kindnet-912000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kindnet-912000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 11:19:53.049721   13583 iso.go:125] acquiring lock: {Name:mkb282eb7d87bd36e7a71d78648e2dadb7f0a89e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 11:19:53.057109   13583 out.go:177] * Starting "kindnet-912000" primary control-plane node in "kindnet-912000" cluster
	I0314 11:19:53.061289   13583 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0314 11:19:53.061303   13583 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0314 11:19:53.061312   13583 cache.go:56] Caching tarball of preloaded images
	I0314 11:19:53.061370   13583 preload.go:173] Found /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0314 11:19:53.061380   13583 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0314 11:19:53.061434   13583 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/kindnet-912000/config.json ...
	I0314 11:19:53.061447   13583 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/kindnet-912000/config.json: {Name:mkb28f6bfbaede55ba73e9a2b3b43694b4df2832 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 11:19:53.061774   13583 start.go:360] acquireMachinesLock for kindnet-912000: {Name:mk7f342f84a1fcf2acaf2ebdc885e8ee2d848fd8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 11:19:53.061809   13583 start.go:364] duration metric: took 26.208µs to acquireMachinesLock for "kindnet-912000"
	I0314 11:19:53.061821   13583 start.go:93] Provisioning new machine with config: &{Name:kindnet-912000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kindnet-912000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0314 11:19:53.061860   13583 start.go:125] createHost starting for "" (driver="qemu2")
	I0314 11:19:53.070256   13583 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0314 11:19:53.084920   13583 start.go:159] libmachine.API.Create for "kindnet-912000" (driver="qemu2")
	I0314 11:19:53.084950   13583 client.go:168] LocalClient.Create starting
	I0314 11:19:53.085010   13583 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/ca.pem
	I0314 11:19:53.085039   13583 main.go:141] libmachine: Decoding PEM data...
	I0314 11:19:53.085051   13583 main.go:141] libmachine: Parsing certificate...
	I0314 11:19:53.085095   13583 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/cert.pem
	I0314 11:19:53.085116   13583 main.go:141] libmachine: Decoding PEM data...
	I0314 11:19:53.085123   13583 main.go:141] libmachine: Parsing certificate...
	I0314 11:19:53.085468   13583 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18384-10823/.minikube/cache/iso/arm64/minikube-v1.32.1-1710348681-18375-arm64.iso...
	I0314 11:19:53.226454   13583 main.go:141] libmachine: Creating SSH key...
	I0314 11:19:53.276099   13583 main.go:141] libmachine: Creating Disk image...
	I0314 11:19:53.276104   13583 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0314 11:19:53.276272   13583 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/kindnet-912000/disk.qcow2.raw /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/kindnet-912000/disk.qcow2
	I0314 11:19:53.288269   13583 main.go:141] libmachine: STDOUT: 
	I0314 11:19:53.288301   13583 main.go:141] libmachine: STDERR: 
	I0314 11:19:53.288351   13583 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/kindnet-912000/disk.qcow2 +20000M
	I0314 11:19:53.304639   13583 main.go:141] libmachine: STDOUT: Image resized.
	
	I0314 11:19:53.304658   13583 main.go:141] libmachine: STDERR: 
	I0314 11:19:53.304680   13583 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/kindnet-912000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/kindnet-912000/disk.qcow2
	I0314 11:19:53.304685   13583 main.go:141] libmachine: Starting QEMU VM...
	I0314 11:19:53.304713   13583 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/kindnet-912000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/kindnet-912000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/kindnet-912000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:81:a0:b6:7d:26 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/kindnet-912000/disk.qcow2
	I0314 11:19:53.306545   13583 main.go:141] libmachine: STDOUT: 
	I0314 11:19:53.306561   13583 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0314 11:19:53.306581   13583 client.go:171] duration metric: took 221.624042ms to LocalClient.Create
	I0314 11:19:55.308857   13583 start.go:128] duration metric: took 2.246954375s to createHost
	I0314 11:19:55.308949   13583 start.go:83] releasing machines lock for "kindnet-912000", held for 2.2471235s
	W0314 11:19:55.309002   13583 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0314 11:19:55.318771   13583 out.go:177] * Deleting "kindnet-912000" in qemu2 ...
	W0314 11:19:55.346295   13583 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0314 11:19:55.346334   13583 start.go:728] Will try again in 5 seconds ...
	I0314 11:20:00.348603   13583 start.go:360] acquireMachinesLock for kindnet-912000: {Name:mk7f342f84a1fcf2acaf2ebdc885e8ee2d848fd8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 11:20:00.349096   13583 start.go:364] duration metric: took 386.166µs to acquireMachinesLock for "kindnet-912000"
	I0314 11:20:00.349230   13583 start.go:93] Provisioning new machine with config: &{Name:kindnet-912000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kindnet-912000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0314 11:20:00.349444   13583 start.go:125] createHost starting for "" (driver="qemu2")
	I0314 11:20:00.359029   13583 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0314 11:20:00.408432   13583 start.go:159] libmachine.API.Create for "kindnet-912000" (driver="qemu2")
	I0314 11:20:00.408486   13583 client.go:168] LocalClient.Create starting
	I0314 11:20:00.408585   13583 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/ca.pem
	I0314 11:20:00.408646   13583 main.go:141] libmachine: Decoding PEM data...
	I0314 11:20:00.408666   13583 main.go:141] libmachine: Parsing certificate...
	I0314 11:20:00.408723   13583 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/cert.pem
	I0314 11:20:00.408764   13583 main.go:141] libmachine: Decoding PEM data...
	I0314 11:20:00.408776   13583 main.go:141] libmachine: Parsing certificate...
	I0314 11:20:00.409280   13583 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18384-10823/.minikube/cache/iso/arm64/minikube-v1.32.1-1710348681-18375-arm64.iso...
	I0314 11:20:00.568639   13583 main.go:141] libmachine: Creating SSH key...
	I0314 11:20:00.664060   13583 main.go:141] libmachine: Creating Disk image...
	I0314 11:20:00.664067   13583 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0314 11:20:00.664288   13583 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/kindnet-912000/disk.qcow2.raw /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/kindnet-912000/disk.qcow2
	I0314 11:20:00.677466   13583 main.go:141] libmachine: STDOUT: 
	I0314 11:20:00.677492   13583 main.go:141] libmachine: STDERR: 
	I0314 11:20:00.677555   13583 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/kindnet-912000/disk.qcow2 +20000M
	I0314 11:20:00.688614   13583 main.go:141] libmachine: STDOUT: Image resized.
	
	I0314 11:20:00.688632   13583 main.go:141] libmachine: STDERR: 
	I0314 11:20:00.688646   13583 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/kindnet-912000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/kindnet-912000/disk.qcow2
	I0314 11:20:00.688653   13583 main.go:141] libmachine: Starting QEMU VM...
	I0314 11:20:00.688694   13583 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/kindnet-912000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/kindnet-912000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/kindnet-912000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:ad:f5:ed:a9:f5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/kindnet-912000/disk.qcow2
	I0314 11:20:00.690457   13583 main.go:141] libmachine: STDOUT: 
	I0314 11:20:00.690470   13583 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0314 11:20:00.690482   13583 client.go:171] duration metric: took 281.990916ms to LocalClient.Create
	I0314 11:20:02.692685   13583 start.go:128] duration metric: took 2.34316125s to createHost
	I0314 11:20:02.692772   13583 start.go:83] releasing machines lock for "kindnet-912000", held for 2.343644083s
	W0314 11:20:02.693061   13583 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-912000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-912000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0314 11:20:02.703763   13583 out.go:177] 
	W0314 11:20:02.711059   13583 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0314 11:20:02.711125   13583 out.go:239] * 
	* 
	W0314 11:20:02.713752   13583 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0314 11:20:02.723718   13583 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.79s)
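Before each VM launch the log shows the same two-step disk provisioning: libmachine converts the raw boot2docker seed image to qcow2 and then grows it by +20000M, so the 20000 MB disk is allocated sparsely rather than written out in full. A rough equivalent of those two invocations, sketched in Go (the helper name and the relative paths are illustrative, not minikube's actual code):

	// createDisk mirrors the two qemu-img commands in the log: convert the
	// raw seed image to qcow2, then grow the qcow2 by sizeMB megabytes.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func createDisk(raw, qcow2 string, sizeMB int) error {
		if out, err := exec.Command("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow2).CombinedOutput(); err != nil {
			return fmt.Errorf("qemu-img convert: %v: %s", err, out)
		}
		if out, err := exec.Command("qemu-img", "resize", qcow2, fmt.Sprintf("+%dM", sizeMB)).CombinedOutput(); err != nil {
			return fmt.Errorf("qemu-img resize: %v: %s", err, out)
		}
		return nil
	}

	func main() {
		if err := createDisk("disk.qcow2.raw", "disk.qcow2", 20000); err != nil {
			fmt.Println(err)
		}
	}

Note that both qemu-img steps succeed in every run above; provisioning only fails afterwards, at the socket_vmnet connect.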

TestNetworkPlugins/group/calico/Start (9.76s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-912000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-912000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.761712s)

-- stdout --
	* [calico-912000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18384
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-912000" primary control-plane node in "calico-912000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-912000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0314 11:20:05.174578   13706 out.go:291] Setting OutFile to fd 1 ...
	I0314 11:20:05.174699   13706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:20:05.174702   13706 out.go:304] Setting ErrFile to fd 2...
	I0314 11:20:05.174704   13706 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:20:05.174813   13706 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
	I0314 11:20:05.175830   13706 out.go:298] Setting JSON to false
	I0314 11:20:05.191809   13706 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8377,"bootTime":1710432028,"procs":389,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0314 11:20:05.191883   13706 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0314 11:20:05.198291   13706 out.go:177] * [calico-912000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0314 11:20:05.205113   13706 out.go:177]   - MINIKUBE_LOCATION=18384
	I0314 11:20:05.205167   13706 notify.go:220] Checking for updates...
	I0314 11:20:05.210201   13706 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	I0314 11:20:05.213274   13706 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0314 11:20:05.216126   13706 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 11:20:05.219167   13706 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	I0314 11:20:05.222247   13706 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 11:20:05.224035   13706 config.go:182] Loaded profile config "multinode-382000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 11:20:05.224101   13706 config.go:182] Loaded profile config "stopped-upgrade-157000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0314 11:20:05.224155   13706 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 11:20:05.228213   13706 out.go:177] * Using the qemu2 driver based on user configuration
	I0314 11:20:05.235044   13706 start.go:297] selected driver: qemu2
	I0314 11:20:05.235050   13706 start.go:901] validating driver "qemu2" against <nil>
	I0314 11:20:05.235055   13706 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 11:20:05.237133   13706 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0314 11:20:05.240193   13706 out.go:177] * Automatically selected the socket_vmnet network
	I0314 11:20:05.243291   13706 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 11:20:05.243325   13706 cni.go:84] Creating CNI manager for "calico"
	I0314 11:20:05.243332   13706 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0314 11:20:05.243363   13706 start.go:340] cluster config:
	{Name:calico-912000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:calico-912000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 11:20:05.247316   13706 iso.go:125] acquiring lock: {Name:mkb282eb7d87bd36e7a71d78648e2dadb7f0a89e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 11:20:05.254137   13706 out.go:177] * Starting "calico-912000" primary control-plane node in "calico-912000" cluster
	I0314 11:20:05.258171   13706 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0314 11:20:05.258192   13706 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0314 11:20:05.258208   13706 cache.go:56] Caching tarball of preloaded images
	I0314 11:20:05.258258   13706 preload.go:173] Found /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0314 11:20:05.258263   13706 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0314 11:20:05.258311   13706 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/calico-912000/config.json ...
	I0314 11:20:05.258320   13706 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/calico-912000/config.json: {Name:mk798b8c2836e36901104f4955d91798fb5c4904 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 11:20:05.258509   13706 start.go:360] acquireMachinesLock for calico-912000: {Name:mk7f342f84a1fcf2acaf2ebdc885e8ee2d848fd8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 11:20:05.258537   13706 start.go:364] duration metric: took 22.709µs to acquireMachinesLock for "calico-912000"
	I0314 11:20:05.258549   13706 start.go:93] Provisioning new machine with config: &{Name:calico-912000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:calico-912000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0314 11:20:05.258577   13706 start.go:125] createHost starting for "" (driver="qemu2")
	I0314 11:20:05.267221   13706 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0314 11:20:05.281615   13706 start.go:159] libmachine.API.Create for "calico-912000" (driver="qemu2")
	I0314 11:20:05.281651   13706 client.go:168] LocalClient.Create starting
	I0314 11:20:05.281705   13706 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/ca.pem
	I0314 11:20:05.281733   13706 main.go:141] libmachine: Decoding PEM data...
	I0314 11:20:05.281744   13706 main.go:141] libmachine: Parsing certificate...
	I0314 11:20:05.281798   13706 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/cert.pem
	I0314 11:20:05.281819   13706 main.go:141] libmachine: Decoding PEM data...
	I0314 11:20:05.281824   13706 main.go:141] libmachine: Parsing certificate...
	I0314 11:20:05.282175   13706 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18384-10823/.minikube/cache/iso/arm64/minikube-v1.32.1-1710348681-18375-arm64.iso...
	I0314 11:20:05.423306   13706 main.go:141] libmachine: Creating SSH key...
	I0314 11:20:05.452400   13706 main.go:141] libmachine: Creating Disk image...
	I0314 11:20:05.452406   13706 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0314 11:20:05.452588   13706 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/calico-912000/disk.qcow2.raw /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/calico-912000/disk.qcow2
	I0314 11:20:05.464285   13706 main.go:141] libmachine: STDOUT: 
	I0314 11:20:05.464304   13706 main.go:141] libmachine: STDERR: 
	I0314 11:20:05.464360   13706 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/calico-912000/disk.qcow2 +20000M
	I0314 11:20:05.475332   13706 main.go:141] libmachine: STDOUT: Image resized.
	
	I0314 11:20:05.475353   13706 main.go:141] libmachine: STDERR: 
	I0314 11:20:05.475371   13706 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/calico-912000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/calico-912000/disk.qcow2
	I0314 11:20:05.475375   13706 main.go:141] libmachine: Starting QEMU VM...
	I0314 11:20:05.475411   13706 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/calico-912000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/calico-912000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/calico-912000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:a0:89:0a:a9:eb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/calico-912000/disk.qcow2
	I0314 11:20:05.477213   13706 main.go:141] libmachine: STDOUT: 
	I0314 11:20:05.477227   13706 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0314 11:20:05.477247   13706 client.go:171] duration metric: took 195.589416ms to LocalClient.Create
	I0314 11:20:07.479603   13706 start.go:128] duration metric: took 2.220984125s to createHost
	I0314 11:20:07.479723   13706 start.go:83] releasing machines lock for "calico-912000", held for 2.221168458s
	W0314 11:20:07.479777   13706 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0314 11:20:07.490548   13706 out.go:177] * Deleting "calico-912000" in qemu2 ...
	W0314 11:20:07.522177   13706 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0314 11:20:07.522215   13706 start.go:728] Will try again in 5 seconds ...
	I0314 11:20:12.524461   13706 start.go:360] acquireMachinesLock for calico-912000: {Name:mk7f342f84a1fcf2acaf2ebdc885e8ee2d848fd8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 11:20:12.524930   13706 start.go:364] duration metric: took 345.833µs to acquireMachinesLock for "calico-912000"
	I0314 11:20:12.524997   13706 start.go:93] Provisioning new machine with config: &{Name:calico-912000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:calico-912000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0314 11:20:12.525269   13706 start.go:125] createHost starting for "" (driver="qemu2")
	I0314 11:20:12.532854   13706 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0314 11:20:12.579278   13706 start.go:159] libmachine.API.Create for "calico-912000" (driver="qemu2")
	I0314 11:20:12.579381   13706 client.go:168] LocalClient.Create starting
	I0314 11:20:12.579518   13706 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/ca.pem
	I0314 11:20:12.579583   13706 main.go:141] libmachine: Decoding PEM data...
	I0314 11:20:12.579611   13706 main.go:141] libmachine: Parsing certificate...
	I0314 11:20:12.579667   13706 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/cert.pem
	I0314 11:20:12.579708   13706 main.go:141] libmachine: Decoding PEM data...
	I0314 11:20:12.579723   13706 main.go:141] libmachine: Parsing certificate...
	I0314 11:20:12.580257   13706 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18384-10823/.minikube/cache/iso/arm64/minikube-v1.32.1-1710348681-18375-arm64.iso...
	I0314 11:20:12.730219   13706 main.go:141] libmachine: Creating SSH key...
	I0314 11:20:12.835647   13706 main.go:141] libmachine: Creating Disk image...
	I0314 11:20:12.835653   13706 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0314 11:20:12.835873   13706 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/calico-912000/disk.qcow2.raw /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/calico-912000/disk.qcow2
	I0314 11:20:12.848713   13706 main.go:141] libmachine: STDOUT: 
	I0314 11:20:12.848736   13706 main.go:141] libmachine: STDERR: 
	I0314 11:20:12.848785   13706 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/calico-912000/disk.qcow2 +20000M
	I0314 11:20:12.859250   13706 main.go:141] libmachine: STDOUT: Image resized.
	
	I0314 11:20:12.859267   13706 main.go:141] libmachine: STDERR: 
	I0314 11:20:12.859275   13706 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/calico-912000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/calico-912000/disk.qcow2
	I0314 11:20:12.859280   13706 main.go:141] libmachine: Starting QEMU VM...
	I0314 11:20:12.859313   13706 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/calico-912000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/calico-912000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/calico-912000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:1b:99:67:aa:e6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/calico-912000/disk.qcow2
	I0314 11:20:12.860912   13706 main.go:141] libmachine: STDOUT: 
	I0314 11:20:12.860927   13706 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0314 11:20:12.860947   13706 client.go:171] duration metric: took 281.55825ms to LocalClient.Create
	I0314 11:20:14.863160   13706 start.go:128] duration metric: took 2.337844333s to createHost
	I0314 11:20:14.863329   13706 start.go:83] releasing machines lock for "calico-912000", held for 2.338296459s
	W0314 11:20:14.863598   13706 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-912000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-912000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0314 11:20:14.873315   13706 out.go:177] 
	W0314 11:20:14.880389   13706 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0314 11:20:14.880438   13706 out.go:239] * 
	* 
	W0314 11:20:14.883082   13706 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0314 11:20:14.892332   13706 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.76s)
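
Every start in this group dies at the same step: minikube shells out to socket_vmnet_client before QEMU ever boots, and the client cannot reach the daemon's socket. A minimal triage sketch for the host (hypothetical commands, assuming the standard socket_vmnet layout at the paths recorded in the log above):

    # Does the daemon socket exist at the path minikube is using?
    ls -l /var/run/socket_vmnet
    # Reproduce the failure outside minikube: socket_vmnet_client takes the
    # socket path followed by a command to wrap, so `true` is enough to
    # probe the connection. A down daemon gives the same "Connection refused".
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true

If the probe fails too, the daemon is down and every remaining failure in this group is a duplicate of this one; see the recovery sketch after the last failure below.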

TestNetworkPlugins/group/custom-flannel/Start (9.74s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-912000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-912000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.739756875s)

-- stdout --
	* [custom-flannel-912000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18384
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-912000" primary control-plane node in "custom-flannel-912000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-912000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0314 11:20:17.468301   13826 out.go:291] Setting OutFile to fd 1 ...
	I0314 11:20:17.468438   13826 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:20:17.468442   13826 out.go:304] Setting ErrFile to fd 2...
	I0314 11:20:17.468445   13826 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:20:17.468576   13826 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
	I0314 11:20:17.469703   13826 out.go:298] Setting JSON to false
	I0314 11:20:17.485780   13826 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8389,"bootTime":1710432028,"procs":389,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0314 11:20:17.485875   13826 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0314 11:20:17.492732   13826 out.go:177] * [custom-flannel-912000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0314 11:20:17.500719   13826 out.go:177]   - MINIKUBE_LOCATION=18384
	I0314 11:20:17.503761   13826 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	I0314 11:20:17.500750   13826 notify.go:220] Checking for updates...
	I0314 11:20:17.509674   13826 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0314 11:20:17.512734   13826 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 11:20:17.514190   13826 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	I0314 11:20:17.517706   13826 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 11:20:17.521022   13826 config.go:182] Loaded profile config "multinode-382000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 11:20:17.521089   13826 config.go:182] Loaded profile config "stopped-upgrade-157000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0314 11:20:17.521134   13826 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 11:20:17.525519   13826 out.go:177] * Using the qemu2 driver based on user configuration
	I0314 11:20:17.532732   13826 start.go:297] selected driver: qemu2
	I0314 11:20:17.532738   13826 start.go:901] validating driver "qemu2" against <nil>
	I0314 11:20:17.532745   13826 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 11:20:17.535105   13826 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0314 11:20:17.538731   13826 out.go:177] * Automatically selected the socket_vmnet network
	I0314 11:20:17.541766   13826 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 11:20:17.541784   13826 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0314 11:20:17.541797   13826 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0314 11:20:17.541831   13826 start.go:340] cluster config:
	{Name:custom-flannel-912000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:custom-flannel-912000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 11:20:17.546460   13826 iso.go:125] acquiring lock: {Name:mkb282eb7d87bd36e7a71d78648e2dadb7f0a89e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 11:20:17.552709   13826 out.go:177] * Starting "custom-flannel-912000" primary control-plane node in "custom-flannel-912000" cluster
	I0314 11:20:17.556688   13826 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0314 11:20:17.556707   13826 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0314 11:20:17.556718   13826 cache.go:56] Caching tarball of preloaded images
	I0314 11:20:17.556765   13826 preload.go:173] Found /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0314 11:20:17.556771   13826 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0314 11:20:17.556830   13826 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/custom-flannel-912000/config.json ...
	I0314 11:20:17.556840   13826 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/custom-flannel-912000/config.json: {Name:mk7442855ae39835f857d6ebeec09ac16a489eae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 11:20:17.557051   13826 start.go:360] acquireMachinesLock for custom-flannel-912000: {Name:mk7f342f84a1fcf2acaf2ebdc885e8ee2d848fd8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 11:20:17.557084   13826 start.go:364] duration metric: took 24.583µs to acquireMachinesLock for "custom-flannel-912000"
	I0314 11:20:17.557096   13826 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-912000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:custom-flannel-912000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0314 11:20:17.557127   13826 start.go:125] createHost starting for "" (driver="qemu2")
	I0314 11:20:17.565682   13826 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0314 11:20:17.581325   13826 start.go:159] libmachine.API.Create for "custom-flannel-912000" (driver="qemu2")
	I0314 11:20:17.581350   13826 client.go:168] LocalClient.Create starting
	I0314 11:20:17.581400   13826 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/ca.pem
	I0314 11:20:17.581430   13826 main.go:141] libmachine: Decoding PEM data...
	I0314 11:20:17.581444   13826 main.go:141] libmachine: Parsing certificate...
	I0314 11:20:17.581487   13826 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/cert.pem
	I0314 11:20:17.581507   13826 main.go:141] libmachine: Decoding PEM data...
	I0314 11:20:17.581520   13826 main.go:141] libmachine: Parsing certificate...
	I0314 11:20:17.581858   13826 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18384-10823/.minikube/cache/iso/arm64/minikube-v1.32.1-1710348681-18375-arm64.iso...
	I0314 11:20:17.724823   13826 main.go:141] libmachine: Creating SSH key...
	I0314 11:20:17.769249   13826 main.go:141] libmachine: Creating Disk image...
	I0314 11:20:17.769255   13826 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0314 11:20:17.769439   13826 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/custom-flannel-912000/disk.qcow2.raw /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/custom-flannel-912000/disk.qcow2
	I0314 11:20:17.781659   13826 main.go:141] libmachine: STDOUT: 
	I0314 11:20:17.781680   13826 main.go:141] libmachine: STDERR: 
	I0314 11:20:17.781742   13826 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/custom-flannel-912000/disk.qcow2 +20000M
	I0314 11:20:17.792350   13826 main.go:141] libmachine: STDOUT: Image resized.
	
	I0314 11:20:17.792370   13826 main.go:141] libmachine: STDERR: 
	I0314 11:20:17.792394   13826 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/custom-flannel-912000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/custom-flannel-912000/disk.qcow2
	I0314 11:20:17.792397   13826 main.go:141] libmachine: Starting QEMU VM...
	I0314 11:20:17.792427   13826 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/custom-flannel-912000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/custom-flannel-912000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/custom-flannel-912000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:b2:ea:1d:ae:8c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/custom-flannel-912000/disk.qcow2
	I0314 11:20:17.794242   13826 main.go:141] libmachine: STDOUT: 
	I0314 11:20:17.794257   13826 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0314 11:20:17.794279   13826 client.go:171] duration metric: took 212.920209ms to LocalClient.Create
	I0314 11:20:19.796413   13826 start.go:128] duration metric: took 2.239247459s to createHost
	I0314 11:20:19.796488   13826 start.go:83] releasing machines lock for "custom-flannel-912000", held for 2.239388833s
	W0314 11:20:19.796553   13826 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0314 11:20:19.805335   13826 out.go:177] * Deleting "custom-flannel-912000" in qemu2 ...
	W0314 11:20:19.829374   13826 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0314 11:20:19.829396   13826 start.go:728] Will try again in 5 seconds ...
	I0314 11:20:24.831621   13826 start.go:360] acquireMachinesLock for custom-flannel-912000: {Name:mk7f342f84a1fcf2acaf2ebdc885e8ee2d848fd8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 11:20:24.832169   13826 start.go:364] duration metric: took 407.208µs to acquireMachinesLock for "custom-flannel-912000"
	I0314 11:20:24.832326   13826 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-912000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:custom-flannel-912000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0314 11:20:24.832595   13826 start.go:125] createHost starting for "" (driver="qemu2")
	I0314 11:20:24.842227   13826 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0314 11:20:24.889447   13826 start.go:159] libmachine.API.Create for "custom-flannel-912000" (driver="qemu2")
	I0314 11:20:24.889511   13826 client.go:168] LocalClient.Create starting
	I0314 11:20:24.889625   13826 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/ca.pem
	I0314 11:20:24.889691   13826 main.go:141] libmachine: Decoding PEM data...
	I0314 11:20:24.889711   13826 main.go:141] libmachine: Parsing certificate...
	I0314 11:20:24.889774   13826 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/cert.pem
	I0314 11:20:24.889827   13826 main.go:141] libmachine: Decoding PEM data...
	I0314 11:20:24.889840   13826 main.go:141] libmachine: Parsing certificate...
	I0314 11:20:24.890405   13826 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18384-10823/.minikube/cache/iso/arm64/minikube-v1.32.1-1710348681-18375-arm64.iso...
	I0314 11:20:25.044662   13826 main.go:141] libmachine: Creating SSH key...
	I0314 11:20:25.103003   13826 main.go:141] libmachine: Creating Disk image...
	I0314 11:20:25.103011   13826 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0314 11:20:25.103270   13826 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/custom-flannel-912000/disk.qcow2.raw /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/custom-flannel-912000/disk.qcow2
	I0314 11:20:25.117021   13826 main.go:141] libmachine: STDOUT: 
	I0314 11:20:25.117045   13826 main.go:141] libmachine: STDERR: 
	I0314 11:20:25.117111   13826 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/custom-flannel-912000/disk.qcow2 +20000M
	I0314 11:20:25.129333   13826 main.go:141] libmachine: STDOUT: Image resized.
	
	I0314 11:20:25.129356   13826 main.go:141] libmachine: STDERR: 
	I0314 11:20:25.129369   13826 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/custom-flannel-912000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/custom-flannel-912000/disk.qcow2
	I0314 11:20:25.129375   13826 main.go:141] libmachine: Starting QEMU VM...
	I0314 11:20:25.129419   13826 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/custom-flannel-912000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/custom-flannel-912000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/custom-flannel-912000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:d1:bf:6e:5d:3d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/custom-flannel-912000/disk.qcow2
	I0314 11:20:25.131461   13826 main.go:141] libmachine: STDOUT: 
	I0314 11:20:25.131480   13826 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0314 11:20:25.131492   13826 client.go:171] duration metric: took 241.970958ms to LocalClient.Create
	I0314 11:20:27.133923   13826 start.go:128] duration metric: took 2.301269459s to createHost
	I0314 11:20:27.134037   13826 start.go:83] releasing machines lock for "custom-flannel-912000", held for 2.301835375s
	W0314 11:20:27.134430   13826 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-912000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-912000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0314 11:20:27.144046   13826 out.go:177] 
	W0314 11:20:27.151102   13826 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0314 11:20:27.151136   13826 out.go:239] * 
	* 
	W0314 11:20:27.153443   13826 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0314 11:20:27.162989   13826 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.74s)
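
The error chain is the same here: socket_vmnet_client exits 1, libmachine surfaces that as a create failure, minikube maps it to GUEST_PROVISION (exit status 80), and net_test.go asserts on that exit code. To rule out the hypervisor itself, one could boot the same image with QEMU's user-mode networking in place of the socket netdev; this is a hypothetical isolation step, not part of the recorded run:

    # Same invocation as the log, minus the socket_vmnet_client wrapper and
    # with user-mode networking; if this boots, QEMU and the image are fine
    # and only the vmnet socket path is broken. Paths are from the log above.
    qemu-system-aarch64 -M virt,highmem=off -cpu host -accel hvf -m 3072 -smp 2 \
      -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash \
      -display none -boot d \
      -cdrom /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/custom-flannel-912000/boot2docker.iso \
      -device virtio-net-pci,netdev=net0 -netdev user,id=net0 \
      /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/custom-flannel-912000/disk.qcow2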

TestNetworkPlugins/group/false/Start (9.82s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-912000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-912000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.818560542s)

-- stdout --
	* [false-912000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18384
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-912000" primary control-plane node in "false-912000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-912000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0314 11:20:29.604334   13947 out.go:291] Setting OutFile to fd 1 ...
	I0314 11:20:29.604456   13947 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:20:29.604460   13947 out.go:304] Setting ErrFile to fd 2...
	I0314 11:20:29.604462   13947 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:20:29.604583   13947 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
	I0314 11:20:29.605615   13947 out.go:298] Setting JSON to false
	I0314 11:20:29.621842   13947 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8401,"bootTime":1710432028,"procs":390,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0314 11:20:29.621908   13947 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0314 11:20:29.629146   13947 out.go:177] * [false-912000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0314 11:20:29.636095   13947 out.go:177]   - MINIKUBE_LOCATION=18384
	I0314 11:20:29.636149   13947 notify.go:220] Checking for updates...
	I0314 11:20:29.643118   13947 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	I0314 11:20:29.646022   13947 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0314 11:20:29.649099   13947 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 11:20:29.652123   13947 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	I0314 11:20:29.655033   13947 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 11:20:29.658365   13947 config.go:182] Loaded profile config "multinode-382000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 11:20:29.658433   13947 config.go:182] Loaded profile config "stopped-upgrade-157000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0314 11:20:29.658485   13947 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 11:20:29.663011   13947 out.go:177] * Using the qemu2 driver based on user configuration
	I0314 11:20:29.670053   13947 start.go:297] selected driver: qemu2
	I0314 11:20:29.670058   13947 start.go:901] validating driver "qemu2" against <nil>
	I0314 11:20:29.670063   13947 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 11:20:29.672364   13947 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0314 11:20:29.675058   13947 out.go:177] * Automatically selected the socket_vmnet network
	I0314 11:20:29.678123   13947 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 11:20:29.678141   13947 cni.go:84] Creating CNI manager for "false"
	I0314 11:20:29.678163   13947 start.go:340] cluster config:
	{Name:false-912000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:false-912000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 11:20:29.682447   13947 iso.go:125] acquiring lock: {Name:mkb282eb7d87bd36e7a71d78648e2dadb7f0a89e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 11:20:29.690106   13947 out.go:177] * Starting "false-912000" primary control-plane node in "false-912000" cluster
	I0314 11:20:29.693960   13947 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0314 11:20:29.693973   13947 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0314 11:20:29.693984   13947 cache.go:56] Caching tarball of preloaded images
	I0314 11:20:29.694036   13947 preload.go:173] Found /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0314 11:20:29.694042   13947 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0314 11:20:29.694094   13947 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/false-912000/config.json ...
	I0314 11:20:29.694104   13947 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/false-912000/config.json: {Name:mk859555f22a624dc3ebfdd9b5df0495da7d9375 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 11:20:29.694308   13947 start.go:360] acquireMachinesLock for false-912000: {Name:mk7f342f84a1fcf2acaf2ebdc885e8ee2d848fd8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 11:20:29.694339   13947 start.go:364] duration metric: took 25.625µs to acquireMachinesLock for "false-912000"
	I0314 11:20:29.694351   13947 start.go:93] Provisioning new machine with config: &{Name:false-912000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:false-912000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0314 11:20:29.694384   13947 start.go:125] createHost starting for "" (driver="qemu2")
	I0314 11:20:29.698046   13947 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0314 11:20:29.713941   13947 start.go:159] libmachine.API.Create for "false-912000" (driver="qemu2")
	I0314 11:20:29.713969   13947 client.go:168] LocalClient.Create starting
	I0314 11:20:29.714026   13947 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/ca.pem
	I0314 11:20:29.714053   13947 main.go:141] libmachine: Decoding PEM data...
	I0314 11:20:29.714062   13947 main.go:141] libmachine: Parsing certificate...
	I0314 11:20:29.714102   13947 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/cert.pem
	I0314 11:20:29.714123   13947 main.go:141] libmachine: Decoding PEM data...
	I0314 11:20:29.714130   13947 main.go:141] libmachine: Parsing certificate...
	I0314 11:20:29.714467   13947 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18384-10823/.minikube/cache/iso/arm64/minikube-v1.32.1-1710348681-18375-arm64.iso...
	I0314 11:20:29.856986   13947 main.go:141] libmachine: Creating SSH key...
	I0314 11:20:29.979163   13947 main.go:141] libmachine: Creating Disk image...
	I0314 11:20:29.979171   13947 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0314 11:20:29.979367   13947 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/false-912000/disk.qcow2.raw /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/false-912000/disk.qcow2
	I0314 11:20:29.991621   13947 main.go:141] libmachine: STDOUT: 
	I0314 11:20:29.991641   13947 main.go:141] libmachine: STDERR: 
	I0314 11:20:29.991692   13947 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/false-912000/disk.qcow2 +20000M
	I0314 11:20:30.002591   13947 main.go:141] libmachine: STDOUT: Image resized.
	
	I0314 11:20:30.002616   13947 main.go:141] libmachine: STDERR: 
	I0314 11:20:30.002632   13947 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/false-912000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/false-912000/disk.qcow2
	I0314 11:20:30.002638   13947 main.go:141] libmachine: Starting QEMU VM...
	I0314 11:20:30.002666   13947 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/false-912000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/false-912000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/false-912000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:e5:6b:2f:dc:4f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/false-912000/disk.qcow2
	I0314 11:20:30.004325   13947 main.go:141] libmachine: STDOUT: 
	I0314 11:20:30.004338   13947 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0314 11:20:30.004355   13947 client.go:171] duration metric: took 290.379833ms to LocalClient.Create
	I0314 11:20:32.004964   13947 start.go:128] duration metric: took 2.31054125s to createHost
	I0314 11:20:32.005026   13947 start.go:83] releasing machines lock for "false-912000", held for 2.310672666s
	W0314 11:20:32.005080   13947 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0314 11:20:32.018654   13947 out.go:177] * Deleting "false-912000" in qemu2 ...
	W0314 11:20:32.040997   13947 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0314 11:20:32.041034   13947 start.go:728] Will try again in 5 seconds ...
	I0314 11:20:37.043345   13947 start.go:360] acquireMachinesLock for false-912000: {Name:mk7f342f84a1fcf2acaf2ebdc885e8ee2d848fd8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 11:20:37.043878   13947 start.go:364] duration metric: took 423.125µs to acquireMachinesLock for "false-912000"
	I0314 11:20:37.043984   13947 start.go:93] Provisioning new machine with config: &{Name:false-912000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:false-912000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0314 11:20:37.044259   13947 start.go:125] createHost starting for "" (driver="qemu2")
	I0314 11:20:37.049653   13947 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0314 11:20:37.099399   13947 start.go:159] libmachine.API.Create for "false-912000" (driver="qemu2")
	I0314 11:20:37.099469   13947 client.go:168] LocalClient.Create starting
	I0314 11:20:37.099596   13947 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/ca.pem
	I0314 11:20:37.099679   13947 main.go:141] libmachine: Decoding PEM data...
	I0314 11:20:37.099695   13947 main.go:141] libmachine: Parsing certificate...
	I0314 11:20:37.099757   13947 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/cert.pem
	I0314 11:20:37.099799   13947 main.go:141] libmachine: Decoding PEM data...
	I0314 11:20:37.099816   13947 main.go:141] libmachine: Parsing certificate...
	I0314 11:20:37.100324   13947 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18384-10823/.minikube/cache/iso/arm64/minikube-v1.32.1-1710348681-18375-arm64.iso...
	I0314 11:20:37.253918   13947 main.go:141] libmachine: Creating SSH key...
	I0314 11:20:37.324286   13947 main.go:141] libmachine: Creating Disk image...
	I0314 11:20:37.324302   13947 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0314 11:20:37.324540   13947 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/false-912000/disk.qcow2.raw /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/false-912000/disk.qcow2
	I0314 11:20:37.337422   13947 main.go:141] libmachine: STDOUT: 
	I0314 11:20:37.337446   13947 main.go:141] libmachine: STDERR: 
	I0314 11:20:37.337518   13947 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/false-912000/disk.qcow2 +20000M
	I0314 11:20:37.348349   13947 main.go:141] libmachine: STDOUT: Image resized.
	
	I0314 11:20:37.348379   13947 main.go:141] libmachine: STDERR: 
	I0314 11:20:37.348394   13947 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/false-912000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/false-912000/disk.qcow2
	I0314 11:20:37.348399   13947 main.go:141] libmachine: Starting QEMU VM...
	I0314 11:20:37.348449   13947 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/false-912000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/false-912000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/false-912000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:6d:c8:4f:4a:a6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/false-912000/disk.qcow2
	I0314 11:20:37.350238   13947 main.go:141] libmachine: STDOUT: 
	I0314 11:20:37.350253   13947 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0314 11:20:37.350270   13947 client.go:171] duration metric: took 250.794959ms to LocalClient.Create
	I0314 11:20:39.352440   13947 start.go:128] duration metric: took 2.308123833s to createHost
	I0314 11:20:39.352500   13947 start.go:83] releasing machines lock for "false-912000", held for 2.308593375s
	W0314 11:20:39.352842   13947 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-912000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-912000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0314 11:20:39.361435   13947 out.go:177] 
	W0314 11:20:39.366525   13947 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0314 11:20:39.366550   13947 out.go:239] * 
	* 
	W0314 11:20:39.368432   13947 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0314 11:20:39.378412   13947 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.82s)
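
All of the failures in this group share that one root cause, so the fix is on the host rather than in the tests. A recovery sketch, assuming socket_vmnet was installed the way its README describes (Homebrew service or a manual copy under /opt/socket_vmnet); neither command appears in this log:

    # Homebrew install: the service must run as root to use vmnet.
    sudo brew services start socket_vmnet
    # Manual install: the gateway address is the README's example value.
    sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet

Once the daemon is listening again, rerunning these groups should exercise the CNI configurations the tests were actually written to cover.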

TestNetworkPlugins/group/enable-default-cni/Start (9.83s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-912000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-912000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.826055458s)

-- stdout --
	* [enable-default-cni-912000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18384
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-912000" primary control-plane node in "enable-default-cni-912000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-912000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0314 11:20:41.666295   14060 out.go:291] Setting OutFile to fd 1 ...
	I0314 11:20:41.666440   14060 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:20:41.666443   14060 out.go:304] Setting ErrFile to fd 2...
	I0314 11:20:41.666445   14060 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:20:41.666569   14060 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
	I0314 11:20:41.667679   14060 out.go:298] Setting JSON to false
	I0314 11:20:41.683775   14060 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8413,"bootTime":1710432028,"procs":381,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0314 11:20:41.683857   14060 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0314 11:20:41.690677   14060 out.go:177] * [enable-default-cni-912000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0314 11:20:41.698740   14060 notify.go:220] Checking for updates...
	I0314 11:20:41.698768   14060 out.go:177]   - MINIKUBE_LOCATION=18384
	I0314 11:20:41.702649   14060 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	I0314 11:20:41.705633   14060 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0314 11:20:41.708639   14060 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 11:20:41.711621   14060 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	I0314 11:20:41.714607   14060 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 11:20:41.717970   14060 config.go:182] Loaded profile config "multinode-382000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 11:20:41.718036   14060 config.go:182] Loaded profile config "stopped-upgrade-157000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0314 11:20:41.718086   14060 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 11:20:41.721537   14060 out.go:177] * Using the qemu2 driver based on user configuration
	I0314 11:20:41.728655   14060 start.go:297] selected driver: qemu2
	I0314 11:20:41.728661   14060 start.go:901] validating driver "qemu2" against <nil>
	I0314 11:20:41.728666   14060 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 11:20:41.731016   14060 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0314 11:20:41.734575   14060 out.go:177] * Automatically selected the socket_vmnet network
	E0314 11:20:41.738733   14060 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0314 11:20:41.738746   14060 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 11:20:41.738785   14060 cni.go:84] Creating CNI manager for "bridge"
	I0314 11:20:41.738789   14060 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0314 11:20:41.738815   14060 start.go:340] cluster config:
	{Name:enable-default-cni-912000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:enable-default-cni-912000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 11:20:41.743005   14060 iso.go:125] acquiring lock: {Name:mkb282eb7d87bd36e7a71d78648e2dadb7f0a89e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 11:20:41.750694   14060 out.go:177] * Starting "enable-default-cni-912000" primary control-plane node in "enable-default-cni-912000" cluster
	I0314 11:20:41.754672   14060 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0314 11:20:41.754687   14060 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0314 11:20:41.754698   14060 cache.go:56] Caching tarball of preloaded images
	I0314 11:20:41.754749   14060 preload.go:173] Found /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0314 11:20:41.754754   14060 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0314 11:20:41.754824   14060 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/enable-default-cni-912000/config.json ...
	I0314 11:20:41.754835   14060 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/enable-default-cni-912000/config.json: {Name:mk3cf718d42b23b664f0ccae152ac16932e5c518 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 11:20:41.755189   14060 start.go:360] acquireMachinesLock for enable-default-cni-912000: {Name:mk7f342f84a1fcf2acaf2ebdc885e8ee2d848fd8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 11:20:41.755221   14060 start.go:364] duration metric: took 23.667µs to acquireMachinesLock for "enable-default-cni-912000"
	I0314 11:20:41.755233   14060 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-912000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:enable-default-cni-912000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0314 11:20:41.755268   14060 start.go:125] createHost starting for "" (driver="qemu2")
	I0314 11:20:41.763630   14060 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0314 11:20:41.777943   14060 start.go:159] libmachine.API.Create for "enable-default-cni-912000" (driver="qemu2")
	I0314 11:20:41.777969   14060 client.go:168] LocalClient.Create starting
	I0314 11:20:41.778021   14060 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/ca.pem
	I0314 11:20:41.778049   14060 main.go:141] libmachine: Decoding PEM data...
	I0314 11:20:41.778059   14060 main.go:141] libmachine: Parsing certificate...
	I0314 11:20:41.778100   14060 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/cert.pem
	I0314 11:20:41.778121   14060 main.go:141] libmachine: Decoding PEM data...
	I0314 11:20:41.778127   14060 main.go:141] libmachine: Parsing certificate...
	I0314 11:20:41.778449   14060 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18384-10823/.minikube/cache/iso/arm64/minikube-v1.32.1-1710348681-18375-arm64.iso...
	I0314 11:20:41.920659   14060 main.go:141] libmachine: Creating SSH key...
	I0314 11:20:42.023923   14060 main.go:141] libmachine: Creating Disk image...
	I0314 11:20:42.023935   14060 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0314 11:20:42.024140   14060 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/enable-default-cni-912000/disk.qcow2.raw /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/enable-default-cni-912000/disk.qcow2
	I0314 11:20:42.036606   14060 main.go:141] libmachine: STDOUT: 
	I0314 11:20:42.036629   14060 main.go:141] libmachine: STDERR: 
	I0314 11:20:42.036695   14060 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/enable-default-cni-912000/disk.qcow2 +20000M
	I0314 11:20:42.047436   14060 main.go:141] libmachine: STDOUT: Image resized.
	
	I0314 11:20:42.047455   14060 main.go:141] libmachine: STDERR: 
	I0314 11:20:42.047468   14060 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/enable-default-cni-912000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/enable-default-cni-912000/disk.qcow2
	I0314 11:20:42.047474   14060 main.go:141] libmachine: Starting QEMU VM...
	I0314 11:20:42.047502   14060 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/enable-default-cni-912000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/enable-default-cni-912000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/enable-default-cni-912000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:32:41:86:25:98 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/enable-default-cni-912000/disk.qcow2
	I0314 11:20:42.049286   14060 main.go:141] libmachine: STDOUT: 
	I0314 11:20:42.049305   14060 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0314 11:20:42.049322   14060 client.go:171] duration metric: took 271.346583ms to LocalClient.Create
	I0314 11:20:44.051578   14060 start.go:128] duration metric: took 2.296263792s to createHost
	I0314 11:20:44.051682   14060 start.go:83] releasing machines lock for "enable-default-cni-912000", held for 2.296444458s
	W0314 11:20:44.051736   14060 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0314 11:20:44.061774   14060 out.go:177] * Deleting "enable-default-cni-912000" in qemu2 ...
	W0314 11:20:44.086498   14060 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0314 11:20:44.086525   14060 start.go:728] Will try again in 5 seconds ...
	I0314 11:20:49.088640   14060 start.go:360] acquireMachinesLock for enable-default-cni-912000: {Name:mk7f342f84a1fcf2acaf2ebdc885e8ee2d848fd8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 11:20:49.088818   14060 start.go:364] duration metric: took 151.375µs to acquireMachinesLock for "enable-default-cni-912000"
	I0314 11:20:49.088864   14060 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-912000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:enable-default-cni-912000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0314 11:20:49.088949   14060 start.go:125] createHost starting for "" (driver="qemu2")
	I0314 11:20:49.097854   14060 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0314 11:20:49.121409   14060 start.go:159] libmachine.API.Create for "enable-default-cni-912000" (driver="qemu2")
	I0314 11:20:49.121441   14060 client.go:168] LocalClient.Create starting
	I0314 11:20:49.121514   14060 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/ca.pem
	I0314 11:20:49.121563   14060 main.go:141] libmachine: Decoding PEM data...
	I0314 11:20:49.121575   14060 main.go:141] libmachine: Parsing certificate...
	I0314 11:20:49.121621   14060 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/cert.pem
	I0314 11:20:49.121647   14060 main.go:141] libmachine: Decoding PEM data...
	I0314 11:20:49.121655   14060 main.go:141] libmachine: Parsing certificate...
	I0314 11:20:49.121941   14060 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18384-10823/.minikube/cache/iso/arm64/minikube-v1.32.1-1710348681-18375-arm64.iso...
	I0314 11:20:49.267253   14060 main.go:141] libmachine: Creating SSH key...
	I0314 11:20:49.396354   14060 main.go:141] libmachine: Creating Disk image...
	I0314 11:20:49.396364   14060 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0314 11:20:49.396570   14060 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/enable-default-cni-912000/disk.qcow2.raw /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/enable-default-cni-912000/disk.qcow2
	I0314 11:20:49.408755   14060 main.go:141] libmachine: STDOUT: 
	I0314 11:20:49.408781   14060 main.go:141] libmachine: STDERR: 
	I0314 11:20:49.408853   14060 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/enable-default-cni-912000/disk.qcow2 +20000M
	I0314 11:20:49.419778   14060 main.go:141] libmachine: STDOUT: Image resized.
	
	I0314 11:20:49.419792   14060 main.go:141] libmachine: STDERR: 
	I0314 11:20:49.419806   14060 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/enable-default-cni-912000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/enable-default-cni-912000/disk.qcow2
	I0314 11:20:49.419814   14060 main.go:141] libmachine: Starting QEMU VM...
	I0314 11:20:49.419851   14060 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/enable-default-cni-912000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/enable-default-cni-912000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/enable-default-cni-912000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:f9:8b:ef:fd:23 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/enable-default-cni-912000/disk.qcow2
	I0314 11:20:49.421742   14060 main.go:141] libmachine: STDOUT: 
	I0314 11:20:49.421758   14060 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0314 11:20:49.421775   14060 client.go:171] duration metric: took 300.328042ms to LocalClient.Create
	I0314 11:20:51.423853   14060 start.go:128] duration metric: took 2.334886875s to createHost
	I0314 11:20:51.423880   14060 start.go:83] releasing machines lock for "enable-default-cni-912000", held for 2.335043208s
	W0314 11:20:51.424016   14060 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-912000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-912000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0314 11:20:51.434342   14060 out.go:177] 
	W0314 11:20:51.441337   14060 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0314 11:20:51.441342   14060 out.go:239] * 
	* 
	W0314 11:20:51.441791   14060 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0314 11:20:51.453323   14060 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.83s)
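
Note the E-line at 11:20:41.738733: --enable-default-cni is deprecated and is rewritten internally to --cni=bridge, so this profile exercises the same bridge CNI path as TestNetworkPlugins/group/bridge/Start below. An equivalent invocation without the deprecated flag (same profile and driver as above) would be:

	out/minikube-darwin-arm64 start -p enable-default-cni-912000 --memory=3072 --cni=bridge --driver=qemu2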

TestNetworkPlugins/group/flannel/Start (9.89s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-912000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-912000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.886034s)

-- stdout --
	* [flannel-912000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18384
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-912000" primary control-plane node in "flannel-912000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-912000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0314 11:20:53.648385   14170 out.go:291] Setting OutFile to fd 1 ...
	I0314 11:20:53.648523   14170 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:20:53.648526   14170 out.go:304] Setting ErrFile to fd 2...
	I0314 11:20:53.648529   14170 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:20:53.648661   14170 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
	I0314 11:20:53.649733   14170 out.go:298] Setting JSON to false
	I0314 11:20:53.665501   14170 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8425,"bootTime":1710432028,"procs":380,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0314 11:20:53.665569   14170 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0314 11:20:53.671772   14170 out.go:177] * [flannel-912000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0314 11:20:53.684506   14170 out.go:177]   - MINIKUBE_LOCATION=18384
	I0314 11:20:53.679742   14170 notify.go:220] Checking for updates...
	I0314 11:20:53.690716   14170 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	I0314 11:20:53.693733   14170 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0314 11:20:53.696669   14170 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 11:20:53.699702   14170 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	I0314 11:20:53.702706   14170 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 11:20:53.705946   14170 config.go:182] Loaded profile config "multinode-382000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 11:20:53.706013   14170 config.go:182] Loaded profile config "stopped-upgrade-157000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0314 11:20:53.706061   14170 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 11:20:53.709673   14170 out.go:177] * Using the qemu2 driver based on user configuration
	I0314 11:20:53.716746   14170 start.go:297] selected driver: qemu2
	I0314 11:20:53.716751   14170 start.go:901] validating driver "qemu2" against <nil>
	I0314 11:20:53.716763   14170 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 11:20:53.719093   14170 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0314 11:20:53.722731   14170 out.go:177] * Automatically selected the socket_vmnet network
	I0314 11:20:53.725867   14170 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 11:20:53.725892   14170 cni.go:84] Creating CNI manager for "flannel"
	I0314 11:20:53.725903   14170 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0314 11:20:53.725933   14170 start.go:340] cluster config:
	{Name:flannel-912000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:flannel-912000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 11:20:53.730153   14170 iso.go:125] acquiring lock: {Name:mkb282eb7d87bd36e7a71d78648e2dadb7f0a89e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 11:20:53.737738   14170 out.go:177] * Starting "flannel-912000" primary control-plane node in "flannel-912000" cluster
	I0314 11:20:53.741506   14170 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0314 11:20:53.741518   14170 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0314 11:20:53.741527   14170 cache.go:56] Caching tarball of preloaded images
	I0314 11:20:53.741585   14170 preload.go:173] Found /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0314 11:20:53.741590   14170 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0314 11:20:53.741644   14170 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/flannel-912000/config.json ...
	I0314 11:20:53.741655   14170 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/flannel-912000/config.json: {Name:mk6787a78d983714d5490583f1b0278f54694916 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 11:20:53.741852   14170 start.go:360] acquireMachinesLock for flannel-912000: {Name:mk7f342f84a1fcf2acaf2ebdc885e8ee2d848fd8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 11:20:53.741881   14170 start.go:364] duration metric: took 23.208µs to acquireMachinesLock for "flannel-912000"
	I0314 11:20:53.741893   14170 start.go:93] Provisioning new machine with config: &{Name:flannel-912000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:flannel-912000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0314 11:20:53.741919   14170 start.go:125] createHost starting for "" (driver="qemu2")
	I0314 11:20:53.750719   14170 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0314 11:20:53.765072   14170 start.go:159] libmachine.API.Create for "flannel-912000" (driver="qemu2")
	I0314 11:20:53.765100   14170 client.go:168] LocalClient.Create starting
	I0314 11:20:53.765153   14170 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/ca.pem
	I0314 11:20:53.765181   14170 main.go:141] libmachine: Decoding PEM data...
	I0314 11:20:53.765205   14170 main.go:141] libmachine: Parsing certificate...
	I0314 11:20:53.765257   14170 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/cert.pem
	I0314 11:20:53.765278   14170 main.go:141] libmachine: Decoding PEM data...
	I0314 11:20:53.765283   14170 main.go:141] libmachine: Parsing certificate...
	I0314 11:20:53.765637   14170 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18384-10823/.minikube/cache/iso/arm64/minikube-v1.32.1-1710348681-18375-arm64.iso...
	I0314 11:20:53.906437   14170 main.go:141] libmachine: Creating SSH key...
	I0314 11:20:54.003883   14170 main.go:141] libmachine: Creating Disk image...
	I0314 11:20:54.003890   14170 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0314 11:20:54.004106   14170 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/flannel-912000/disk.qcow2.raw /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/flannel-912000/disk.qcow2
	I0314 11:20:54.016214   14170 main.go:141] libmachine: STDOUT: 
	I0314 11:20:54.016247   14170 main.go:141] libmachine: STDERR: 
	I0314 11:20:54.016306   14170 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/flannel-912000/disk.qcow2 +20000M
	I0314 11:20:54.027104   14170 main.go:141] libmachine: STDOUT: Image resized.
	
	I0314 11:20:54.027118   14170 main.go:141] libmachine: STDERR: 
	I0314 11:20:54.027138   14170 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/flannel-912000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/flannel-912000/disk.qcow2
	I0314 11:20:54.027142   14170 main.go:141] libmachine: Starting QEMU VM...
	I0314 11:20:54.027172   14170 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/flannel-912000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/flannel-912000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/flannel-912000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:2d:2f:59:ad:73 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/flannel-912000/disk.qcow2
	I0314 11:20:54.028900   14170 main.go:141] libmachine: STDOUT: 
	I0314 11:20:54.028914   14170 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0314 11:20:54.028931   14170 client.go:171] duration metric: took 263.825375ms to LocalClient.Create
	I0314 11:20:56.031151   14170 start.go:128] duration metric: took 2.2892035s to createHost
	I0314 11:20:56.031246   14170 start.go:83] releasing machines lock for "flannel-912000", held for 2.289328792s
	W0314 11:20:56.031297   14170 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0314 11:20:56.040997   14170 out.go:177] * Deleting "flannel-912000" in qemu2 ...
	W0314 11:20:56.059876   14170 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0314 11:20:56.059900   14170 start.go:728] Will try again in 5 seconds ...
	I0314 11:21:01.060735   14170 start.go:360] acquireMachinesLock for flannel-912000: {Name:mk7f342f84a1fcf2acaf2ebdc885e8ee2d848fd8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 11:21:01.061197   14170 start.go:364] duration metric: took 375.458µs to acquireMachinesLock for "flannel-912000"
	I0314 11:21:01.061259   14170 start.go:93] Provisioning new machine with config: &{Name:flannel-912000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:flannel-912000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0314 11:21:01.061464   14170 start.go:125] createHost starting for "" (driver="qemu2")
	I0314 11:21:01.070995   14170 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0314 11:21:01.115405   14170 start.go:159] libmachine.API.Create for "flannel-912000" (driver="qemu2")
	I0314 11:21:01.115456   14170 client.go:168] LocalClient.Create starting
	I0314 11:21:01.115564   14170 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/ca.pem
	I0314 11:21:01.115624   14170 main.go:141] libmachine: Decoding PEM data...
	I0314 11:21:01.115637   14170 main.go:141] libmachine: Parsing certificate...
	I0314 11:21:01.115709   14170 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/cert.pem
	I0314 11:21:01.115750   14170 main.go:141] libmachine: Decoding PEM data...
	I0314 11:21:01.115761   14170 main.go:141] libmachine: Parsing certificate...
	I0314 11:21:01.116230   14170 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18384-10823/.minikube/cache/iso/arm64/minikube-v1.32.1-1710348681-18375-arm64.iso...
	I0314 11:21:01.266179   14170 main.go:141] libmachine: Creating SSH key...
	I0314 11:21:01.432111   14170 main.go:141] libmachine: Creating Disk image...
	I0314 11:21:01.432123   14170 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0314 11:21:01.432377   14170 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/flannel-912000/disk.qcow2.raw /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/flannel-912000/disk.qcow2
	I0314 11:21:01.445178   14170 main.go:141] libmachine: STDOUT: 
	I0314 11:21:01.445201   14170 main.go:141] libmachine: STDERR: 
	I0314 11:21:01.445256   14170 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/flannel-912000/disk.qcow2 +20000M
	I0314 11:21:01.456518   14170 main.go:141] libmachine: STDOUT: Image resized.
	
	I0314 11:21:01.456536   14170 main.go:141] libmachine: STDERR: 
	I0314 11:21:01.456546   14170 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/flannel-912000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/flannel-912000/disk.qcow2
	I0314 11:21:01.456550   14170 main.go:141] libmachine: Starting QEMU VM...
	I0314 11:21:01.456592   14170 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/flannel-912000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/flannel-912000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/flannel-912000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:30:27:fa:c1:a6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/flannel-912000/disk.qcow2
	I0314 11:21:01.458398   14170 main.go:141] libmachine: STDOUT: 
	I0314 11:21:01.458418   14170 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0314 11:21:01.458437   14170 client.go:171] duration metric: took 342.971792ms to LocalClient.Create
	I0314 11:21:03.460639   14170 start.go:128] duration metric: took 2.39912825s to createHost
	I0314 11:21:03.460717   14170 start.go:83] releasing machines lock for "flannel-912000", held for 2.399491583s
	W0314 11:21:03.461199   14170 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-912000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-912000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0314 11:21:03.473910   14170 out.go:177] 
	W0314 11:21:03.478824   14170 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0314 11:21:03.478851   14170 out.go:239] * 
	* 
	W0314 11:21:03.481530   14170 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0314 11:21:03.490819   14170 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.89s)
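
The failing step is identical in every profile: libmachine wraps qemu-system-aarch64 in socket_vmnet_client, which connects to the daemon's unix socket and hands the connected socket to QEMU as inherited fd 3 (hence -netdev socket,id=net0,fd=3 in the command line above). A stripped-down sketch of that launch for manual reproduction, assuming the binary paths shown in the log (-nographic here is an assumption for interactive debugging; the real invocation uses -display none -daemonize):

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet \
	  qemu-system-aarch64 -M virt,highmem=off -cpu host -accel hvf -m 3072 -smp 2 \
	  -nographic -device virtio-net-pci,netdev=net0 -netdev socket,id=net0,fd=3 \
	  disk.qcow2

If this prints the same "Connection refused", the daemon is down and the failure is environmental rather than a minikube regression.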

TestNetworkPlugins/group/bridge/Start (9.74s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-912000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-912000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.73425575s)

-- stdout --
	* [bridge-912000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18384
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-912000" primary control-plane node in "bridge-912000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-912000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0314 11:21:06.000828   14295 out.go:291] Setting OutFile to fd 1 ...
	I0314 11:21:06.000965   14295 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:21:06.000968   14295 out.go:304] Setting ErrFile to fd 2...
	I0314 11:21:06.000970   14295 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:21:06.001098   14295 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
	I0314 11:21:06.002121   14295 out.go:298] Setting JSON to false
	I0314 11:21:06.018118   14295 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8438,"bootTime":1710432028,"procs":381,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0314 11:21:06.018180   14295 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0314 11:21:06.023756   14295 out.go:177] * [bridge-912000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0314 11:21:06.031727   14295 out.go:177]   - MINIKUBE_LOCATION=18384
	I0314 11:21:06.031787   14295 notify.go:220] Checking for updates...
	I0314 11:21:06.041741   14295 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	I0314 11:21:06.044677   14295 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0314 11:21:06.047717   14295 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 11:21:06.050700   14295 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	I0314 11:21:06.053657   14295 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 11:21:06.057040   14295 config.go:182] Loaded profile config "multinode-382000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 11:21:06.057109   14295 config.go:182] Loaded profile config "stopped-upgrade-157000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0314 11:21:06.057154   14295 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 11:21:06.061749   14295 out.go:177] * Using the qemu2 driver based on user configuration
	I0314 11:21:06.068732   14295 start.go:297] selected driver: qemu2
	I0314 11:21:06.068738   14295 start.go:901] validating driver "qemu2" against <nil>
	I0314 11:21:06.068743   14295 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 11:21:06.071024   14295 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0314 11:21:06.073732   14295 out.go:177] * Automatically selected the socket_vmnet network
	I0314 11:21:06.076802   14295 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 11:21:06.076834   14295 cni.go:84] Creating CNI manager for "bridge"
	I0314 11:21:06.076839   14295 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0314 11:21:06.076875   14295 start.go:340] cluster config:
	{Name:bridge-912000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:bridge-912000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 11:21:06.081303   14295 iso.go:125] acquiring lock: {Name:mkb282eb7d87bd36e7a71d78648e2dadb7f0a89e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 11:21:06.088707   14295 out.go:177] * Starting "bridge-912000" primary control-plane node in "bridge-912000" cluster
	I0314 11:21:06.092753   14295 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0314 11:21:06.092772   14295 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0314 11:21:06.092791   14295 cache.go:56] Caching tarball of preloaded images
	I0314 11:21:06.092859   14295 preload.go:173] Found /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0314 11:21:06.092865   14295 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0314 11:21:06.092932   14295 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/bridge-912000/config.json ...
	I0314 11:21:06.092943   14295 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/bridge-912000/config.json: {Name:mk15966ed2c3c9a25258f7423881e38dcd4ae3e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 11:21:06.093171   14295 start.go:360] acquireMachinesLock for bridge-912000: {Name:mk7f342f84a1fcf2acaf2ebdc885e8ee2d848fd8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 11:21:06.093204   14295 start.go:364] duration metric: took 26.875µs to acquireMachinesLock for "bridge-912000"
	I0314 11:21:06.093217   14295 start.go:93] Provisioning new machine with config: &{Name:bridge-912000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:bridge-912000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0314 11:21:06.093256   14295 start.go:125] createHost starting for "" (driver="qemu2")
	I0314 11:21:06.097761   14295 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0314 11:21:06.114857   14295 start.go:159] libmachine.API.Create for "bridge-912000" (driver="qemu2")
	I0314 11:21:06.114890   14295 client.go:168] LocalClient.Create starting
	I0314 11:21:06.114958   14295 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/ca.pem
	I0314 11:21:06.114992   14295 main.go:141] libmachine: Decoding PEM data...
	I0314 11:21:06.115000   14295 main.go:141] libmachine: Parsing certificate...
	I0314 11:21:06.115049   14295 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/cert.pem
	I0314 11:21:06.115072   14295 main.go:141] libmachine: Decoding PEM data...
	I0314 11:21:06.115081   14295 main.go:141] libmachine: Parsing certificate...
	I0314 11:21:06.115452   14295 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18384-10823/.minikube/cache/iso/arm64/minikube-v1.32.1-1710348681-18375-arm64.iso...
	I0314 11:21:06.256424   14295 main.go:141] libmachine: Creating SSH key...
	I0314 11:21:06.309208   14295 main.go:141] libmachine: Creating Disk image...
	I0314 11:21:06.309215   14295 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0314 11:21:06.309450   14295 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/bridge-912000/disk.qcow2.raw /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/bridge-912000/disk.qcow2
	I0314 11:21:06.322197   14295 main.go:141] libmachine: STDOUT: 
	I0314 11:21:06.322235   14295 main.go:141] libmachine: STDERR: 
	I0314 11:21:06.322294   14295 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/bridge-912000/disk.qcow2 +20000M
	I0314 11:21:06.333159   14295 main.go:141] libmachine: STDOUT: Image resized.
	
	I0314 11:21:06.333184   14295 main.go:141] libmachine: STDERR: 
	I0314 11:21:06.333204   14295 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/bridge-912000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/bridge-912000/disk.qcow2
	I0314 11:21:06.333209   14295 main.go:141] libmachine: Starting QEMU VM...
	I0314 11:21:06.333236   14295 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/bridge-912000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/bridge-912000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/bridge-912000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:93:f2:44:f0:27 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/bridge-912000/disk.qcow2
	I0314 11:21:06.334910   14295 main.go:141] libmachine: STDOUT: 
	I0314 11:21:06.334930   14295 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0314 11:21:06.334951   14295 client.go:171] duration metric: took 220.054959ms to LocalClient.Create
	I0314 11:21:08.337111   14295 start.go:128] duration metric: took 2.243833167s to createHost
	I0314 11:21:08.337167   14295 start.go:83] releasing machines lock for "bridge-912000", held for 2.2439475s
	W0314 11:21:08.337221   14295 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0314 11:21:08.347446   14295 out.go:177] * Deleting "bridge-912000" in qemu2 ...
	W0314 11:21:08.369331   14295 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0314 11:21:08.369344   14295 start.go:728] Will try again in 5 seconds ...
	I0314 11:21:13.371414   14295 start.go:360] acquireMachinesLock for bridge-912000: {Name:mk7f342f84a1fcf2acaf2ebdc885e8ee2d848fd8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 11:21:13.371504   14295 start.go:364] duration metric: took 70.75µs to acquireMachinesLock for "bridge-912000"
	I0314 11:21:13.371517   14295 start.go:93] Provisioning new machine with config: &{Name:bridge-912000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:bridge-912000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0314 11:21:13.371564   14295 start.go:125] createHost starting for "" (driver="qemu2")
	I0314 11:21:13.380964   14295 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0314 11:21:13.396320   14295 start.go:159] libmachine.API.Create for "bridge-912000" (driver="qemu2")
	I0314 11:21:13.396351   14295 client.go:168] LocalClient.Create starting
	I0314 11:21:13.396436   14295 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/ca.pem
	I0314 11:21:13.396470   14295 main.go:141] libmachine: Decoding PEM data...
	I0314 11:21:13.396479   14295 main.go:141] libmachine: Parsing certificate...
	I0314 11:21:13.396515   14295 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/cert.pem
	I0314 11:21:13.396544   14295 main.go:141] libmachine: Decoding PEM data...
	I0314 11:21:13.396552   14295 main.go:141] libmachine: Parsing certificate...
	I0314 11:21:13.397097   14295 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18384-10823/.minikube/cache/iso/arm64/minikube-v1.32.1-1710348681-18375-arm64.iso...
	I0314 11:21:13.552628   14295 main.go:141] libmachine: Creating SSH key...
	I0314 11:21:13.636814   14295 main.go:141] libmachine: Creating Disk image...
	I0314 11:21:13.636821   14295 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0314 11:21:13.637066   14295 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/bridge-912000/disk.qcow2.raw /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/bridge-912000/disk.qcow2
	I0314 11:21:13.649223   14295 main.go:141] libmachine: STDOUT: 
	I0314 11:21:13.649243   14295 main.go:141] libmachine: STDERR: 
	I0314 11:21:13.649293   14295 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/bridge-912000/disk.qcow2 +20000M
	I0314 11:21:13.659864   14295 main.go:141] libmachine: STDOUT: Image resized.
	
	I0314 11:21:13.659882   14295 main.go:141] libmachine: STDERR: 
	I0314 11:21:13.659897   14295 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/bridge-912000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/bridge-912000/disk.qcow2
	I0314 11:21:13.659901   14295 main.go:141] libmachine: Starting QEMU VM...
	I0314 11:21:13.659949   14295 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/bridge-912000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/bridge-912000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/bridge-912000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:d1:22:f3:e1:7f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/bridge-912000/disk.qcow2
	I0314 11:21:13.661656   14295 main.go:141] libmachine: STDOUT: 
	I0314 11:21:13.661674   14295 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0314 11:21:13.661689   14295 client.go:171] duration metric: took 265.333458ms to LocalClient.Create
	I0314 11:21:15.663876   14295 start.go:128] duration metric: took 2.292279708s to createHost
	I0314 11:21:15.663937   14295 start.go:83] releasing machines lock for "bridge-912000", held for 2.292417542s
	W0314 11:21:15.664301   14295 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-912000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-912000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0314 11:21:15.672880   14295 out.go:177] 
	W0314 11:21:15.679027   14295 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0314 11:21:15.679049   14295 out.go:239] * 
	* 
	W0314 11:21:15.681059   14295 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0314 11:21:15.690919   14295 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.74s)
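
Every attempt in this test dies at the same step: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so QEMU never receives a network file descriptor and the client exits with status 1. A minimal standalone probe for that precondition, in Go since that is what the binary under test is written in (this helper is illustrative and not part of the test suite; the only value assumed from the logs is the SocketVMnetPath shown in the cluster config above):

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// SocketVMnetPath from the cluster config logged above.
	const sock = "/var/run/socket_vmnet"

	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// "connection refused" on a unix socket means the path exists but
		// nothing is listening -- the exact signature in the logs above.
		fmt.Fprintf(os.Stderr, "socket_vmnet unreachable at %s: %v\n", sock, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Printf("socket_vmnet is accepting connections at %s\n", sock)
}

A refused connection (rather than "no such file or directory") points at the socket_vmnet daemon not running on the CI host, which is consistent with the identical ERROR line appearing in each qemu2 start captured in this section.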

TestNetworkPlugins/group/kubenet/Start (9.86s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-912000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-912000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.852995792s)

-- stdout --
	* [kubenet-912000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18384
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-912000" primary control-plane node in "kubenet-912000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-912000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0314 11:21:18.027043   14409 out.go:291] Setting OutFile to fd 1 ...
	I0314 11:21:18.027165   14409 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:21:18.027169   14409 out.go:304] Setting ErrFile to fd 2...
	I0314 11:21:18.027171   14409 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:21:18.027299   14409 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
	I0314 11:21:18.028392   14409 out.go:298] Setting JSON to false
	I0314 11:21:18.044531   14409 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8450,"bootTime":1710432028,"procs":379,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0314 11:21:18.044600   14409 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0314 11:21:18.050741   14409 out.go:177] * [kubenet-912000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0314 11:21:18.059588   14409 out.go:177]   - MINIKUBE_LOCATION=18384
	I0314 11:21:18.063689   14409 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	I0314 11:21:18.059635   14409 notify.go:220] Checking for updates...
	I0314 11:21:18.066679   14409 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0314 11:21:18.068033   14409 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 11:21:18.070639   14409 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	I0314 11:21:18.073670   14409 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 11:21:18.076976   14409 config.go:182] Loaded profile config "multinode-382000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 11:21:18.077050   14409 config.go:182] Loaded profile config "stopped-upgrade-157000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0314 11:21:18.077100   14409 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 11:21:18.080651   14409 out.go:177] * Using the qemu2 driver based on user configuration
	I0314 11:21:18.087680   14409 start.go:297] selected driver: qemu2
	I0314 11:21:18.087686   14409 start.go:901] validating driver "qemu2" against <nil>
	I0314 11:21:18.087691   14409 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 11:21:18.089785   14409 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0314 11:21:18.093659   14409 out.go:177] * Automatically selected the socket_vmnet network
	I0314 11:21:18.096791   14409 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 11:21:18.096850   14409 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0314 11:21:18.096895   14409 start.go:340] cluster config:
	{Name:kubenet-912000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kubenet-912000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 11:21:18.101358   14409 iso.go:125] acquiring lock: {Name:mkb282eb7d87bd36e7a71d78648e2dadb7f0a89e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 11:21:18.108699   14409 out.go:177] * Starting "kubenet-912000" primary control-plane node in "kubenet-912000" cluster
	I0314 11:21:18.112709   14409 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0314 11:21:18.112725   14409 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0314 11:21:18.112746   14409 cache.go:56] Caching tarball of preloaded images
	I0314 11:21:18.112800   14409 preload.go:173] Found /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0314 11:21:18.112806   14409 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0314 11:21:18.112883   14409 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/kubenet-912000/config.json ...
	I0314 11:21:18.112902   14409 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/kubenet-912000/config.json: {Name:mkd215eb0402fd551dbcfda86c8bf2a946f85fa6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 11:21:18.113302   14409 start.go:360] acquireMachinesLock for kubenet-912000: {Name:mk7f342f84a1fcf2acaf2ebdc885e8ee2d848fd8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 11:21:18.113339   14409 start.go:364] duration metric: took 29.084µs to acquireMachinesLock for "kubenet-912000"
	I0314 11:21:18.113353   14409 start.go:93] Provisioning new machine with config: &{Name:kubenet-912000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kubenet-912000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0314 11:21:18.113389   14409 start.go:125] createHost starting for "" (driver="qemu2")
	I0314 11:21:18.121696   14409 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0314 11:21:18.138075   14409 start.go:159] libmachine.API.Create for "kubenet-912000" (driver="qemu2")
	I0314 11:21:18.138102   14409 client.go:168] LocalClient.Create starting
	I0314 11:21:18.138156   14409 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/ca.pem
	I0314 11:21:18.138187   14409 main.go:141] libmachine: Decoding PEM data...
	I0314 11:21:18.138196   14409 main.go:141] libmachine: Parsing certificate...
	I0314 11:21:18.138243   14409 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/cert.pem
	I0314 11:21:18.138267   14409 main.go:141] libmachine: Decoding PEM data...
	I0314 11:21:18.138276   14409 main.go:141] libmachine: Parsing certificate...
	I0314 11:21:18.138663   14409 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18384-10823/.minikube/cache/iso/arm64/minikube-v1.32.1-1710348681-18375-arm64.iso...
	I0314 11:21:18.278830   14409 main.go:141] libmachine: Creating SSH key...
	I0314 11:21:18.386510   14409 main.go:141] libmachine: Creating Disk image...
	I0314 11:21:18.386518   14409 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0314 11:21:18.386746   14409 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/kubenet-912000/disk.qcow2.raw /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/kubenet-912000/disk.qcow2
	I0314 11:21:18.399003   14409 main.go:141] libmachine: STDOUT: 
	I0314 11:21:18.399026   14409 main.go:141] libmachine: STDERR: 
	I0314 11:21:18.399078   14409 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/kubenet-912000/disk.qcow2 +20000M
	I0314 11:21:18.409922   14409 main.go:141] libmachine: STDOUT: Image resized.
	
	I0314 11:21:18.409936   14409 main.go:141] libmachine: STDERR: 
	I0314 11:21:18.409956   14409 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/kubenet-912000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/kubenet-912000/disk.qcow2
	I0314 11:21:18.409961   14409 main.go:141] libmachine: Starting QEMU VM...
	I0314 11:21:18.409993   14409 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/kubenet-912000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/kubenet-912000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/kubenet-912000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:60:3c:88:62:0f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/kubenet-912000/disk.qcow2
	I0314 11:21:18.411693   14409 main.go:141] libmachine: STDOUT: 
	I0314 11:21:18.411709   14409 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0314 11:21:18.411726   14409 client.go:171] duration metric: took 273.617959ms to LocalClient.Create
	I0314 11:21:20.414005   14409 start.go:128] duration metric: took 2.300567458s to createHost
	I0314 11:21:20.414121   14409 start.go:83] releasing machines lock for "kubenet-912000", held for 2.3007645s
	W0314 11:21:20.414173   14409 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0314 11:21:20.427151   14409 out.go:177] * Deleting "kubenet-912000" in qemu2 ...
	W0314 11:21:20.449924   14409 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0314 11:21:20.449951   14409 start.go:728] Will try again in 5 seconds ...
	I0314 11:21:25.451463   14409 start.go:360] acquireMachinesLock for kubenet-912000: {Name:mk7f342f84a1fcf2acaf2ebdc885e8ee2d848fd8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 11:21:25.451876   14409 start.go:364] duration metric: took 331.625µs to acquireMachinesLock for "kubenet-912000"
	I0314 11:21:25.451979   14409 start.go:93] Provisioning new machine with config: &{Name:kubenet-912000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kubenet-912000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0314 11:21:25.452178   14409 start.go:125] createHost starting for "" (driver="qemu2")
	I0314 11:21:25.459380   14409 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0314 11:21:25.505398   14409 start.go:159] libmachine.API.Create for "kubenet-912000" (driver="qemu2")
	I0314 11:21:25.505451   14409 client.go:168] LocalClient.Create starting
	I0314 11:21:25.505573   14409 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/ca.pem
	I0314 11:21:25.505642   14409 main.go:141] libmachine: Decoding PEM data...
	I0314 11:21:25.505660   14409 main.go:141] libmachine: Parsing certificate...
	I0314 11:21:25.505721   14409 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/cert.pem
	I0314 11:21:25.505761   14409 main.go:141] libmachine: Decoding PEM data...
	I0314 11:21:25.505802   14409 main.go:141] libmachine: Parsing certificate...
	I0314 11:21:25.506321   14409 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18384-10823/.minikube/cache/iso/arm64/minikube-v1.32.1-1710348681-18375-arm64.iso...
	I0314 11:21:25.656097   14409 main.go:141] libmachine: Creating SSH key...
	I0314 11:21:25.772303   14409 main.go:141] libmachine: Creating Disk image...
	I0314 11:21:25.772313   14409 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0314 11:21:25.772540   14409 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/kubenet-912000/disk.qcow2.raw /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/kubenet-912000/disk.qcow2
	I0314 11:21:25.785027   14409 main.go:141] libmachine: STDOUT: 
	I0314 11:21:25.785051   14409 main.go:141] libmachine: STDERR: 
	I0314 11:21:25.785110   14409 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/kubenet-912000/disk.qcow2 +20000M
	I0314 11:21:25.796162   14409 main.go:141] libmachine: STDOUT: Image resized.
	
	I0314 11:21:25.796179   14409 main.go:141] libmachine: STDERR: 
	I0314 11:21:25.796200   14409 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/kubenet-912000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/kubenet-912000/disk.qcow2
	I0314 11:21:25.796205   14409 main.go:141] libmachine: Starting QEMU VM...
	I0314 11:21:25.796249   14409 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/kubenet-912000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/kubenet-912000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/kubenet-912000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:d7:a8:dc:b1:1d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/kubenet-912000/disk.qcow2
	I0314 11:21:25.797960   14409 main.go:141] libmachine: STDOUT: 
	I0314 11:21:25.797974   14409 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0314 11:21:25.797987   14409 client.go:171] duration metric: took 292.528292ms to LocalClient.Create
	I0314 11:21:27.800207   14409 start.go:128] duration metric: took 2.347979917s to createHost
	I0314 11:21:27.800309   14409 start.go:83] releasing machines lock for "kubenet-912000", held for 2.348401709s
	W0314 11:21:27.800668   14409 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-912000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-912000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0314 11:21:27.815307   14409 out.go:177] 
	W0314 11:21:27.819346   14409 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0314 11:21:27.819371   14409 out.go:239] * 
	* 
	W0314 11:21:27.821934   14409 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0314 11:21:27.835282   14409 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.86s)
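
The timing of these failures follows from the single-retry recovery path visible in the stderr above: createHost fails after roughly 2.3s, the half-created profile is deleted, "Will try again in 5 seconds ..." elapses, and a second ~2.3s attempt fails identically. A sketch of that control flow, mirroring only the log messages rather than minikube's actual start.go implementation (startHost here is a hypothetical stand-in):

package main

import (
	"errors"
	"fmt"
	"time"
)

// startHost stands in for the createHost step that fails in the logs.
func startHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	if err := startHost(); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
		if err := startHost(); err != nil {
			// the CLI surfaces this as GUEST_PROVISION and exit status 80
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			return
		}
	}
	fmt.Println("host started")
}

Two ~2.3s createHost attempts plus the fixed 5-second backoff account for the 9.86s reported for this test, and the same arithmetic fits the bridge (9.74s) and old-k8s-version (10.21s) runs below.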

TestStartStop/group/old-k8s-version/serial/FirstStart (10.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-885000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-885000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (10.15762325s)

-- stdout --
	* [old-k8s-version-885000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18384
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-885000" primary control-plane node in "old-k8s-version-885000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-885000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0314 11:21:29.211894   14479 out.go:291] Setting OutFile to fd 1 ...
	I0314 11:21:29.212059   14479 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:21:29.212063   14479 out.go:304] Setting ErrFile to fd 2...
	I0314 11:21:29.212066   14479 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:21:29.212204   14479 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
	I0314 11:21:29.213682   14479 out.go:298] Setting JSON to false
	I0314 11:21:29.230933   14479 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8461,"bootTime":1710432028,"procs":378,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0314 11:21:29.231003   14479 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0314 11:21:29.234805   14479 out.go:177] * [old-k8s-version-885000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0314 11:21:29.249847   14479 out.go:177]   - MINIKUBE_LOCATION=18384
	I0314 11:21:29.253777   14479 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	I0314 11:21:29.249850   14479 notify.go:220] Checking for updates...
	I0314 11:21:29.259717   14479 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0314 11:21:29.262738   14479 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 11:21:29.265672   14479 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	I0314 11:21:29.268717   14479 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 11:21:29.272101   14479 config.go:182] Loaded profile config "kubenet-912000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 11:21:29.272184   14479 config.go:182] Loaded profile config "multinode-382000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 11:21:29.272230   14479 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 11:21:29.275720   14479 out.go:177] * Using the qemu2 driver based on user configuration
	I0314 11:21:29.282783   14479 start.go:297] selected driver: qemu2
	I0314 11:21:29.282793   14479 start.go:901] validating driver "qemu2" against <nil>
	I0314 11:21:29.282799   14479 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 11:21:29.285216   14479 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0314 11:21:29.288777   14479 out.go:177] * Automatically selected the socket_vmnet network
	I0314 11:21:29.292816   14479 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 11:21:29.292864   14479 cni.go:84] Creating CNI manager for ""
	I0314 11:21:29.292880   14479 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0314 11:21:29.292911   14479 start.go:340] cluster config:
	{Name:old-k8s-version-885000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-885000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 11:21:29.296927   14479 iso.go:125] acquiring lock: {Name:mkb282eb7d87bd36e7a71d78648e2dadb7f0a89e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 11:21:29.307705   14479 out.go:177] * Starting "old-k8s-version-885000" primary control-plane node in "old-k8s-version-885000" cluster
	I0314 11:21:29.315770   14479 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0314 11:21:29.315791   14479 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0314 11:21:29.315799   14479 cache.go:56] Caching tarball of preloaded images
	I0314 11:21:29.315865   14479 preload.go:173] Found /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0314 11:21:29.315870   14479 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0314 11:21:29.315933   14479 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/old-k8s-version-885000/config.json ...
	I0314 11:21:29.315944   14479 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/old-k8s-version-885000/config.json: {Name:mkb6ede0fd735183487863eae7a7fcf2636c0bfc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 11:21:29.316153   14479 start.go:360] acquireMachinesLock for old-k8s-version-885000: {Name:mk7f342f84a1fcf2acaf2ebdc885e8ee2d848fd8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 11:21:29.316196   14479 start.go:364] duration metric: took 24.375µs to acquireMachinesLock for "old-k8s-version-885000"
	I0314 11:21:29.316208   14479 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-885000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-885000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0314 11:21:29.316244   14479 start.go:125] createHost starting for "" (driver="qemu2")
	I0314 11:21:29.324736   14479 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0314 11:21:29.340306   14479 start.go:159] libmachine.API.Create for "old-k8s-version-885000" (driver="qemu2")
	I0314 11:21:29.340346   14479 client.go:168] LocalClient.Create starting
	I0314 11:21:29.340427   14479 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/ca.pem
	I0314 11:21:29.340458   14479 main.go:141] libmachine: Decoding PEM data...
	I0314 11:21:29.340467   14479 main.go:141] libmachine: Parsing certificate...
	I0314 11:21:29.340504   14479 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/cert.pem
	I0314 11:21:29.340526   14479 main.go:141] libmachine: Decoding PEM data...
	I0314 11:21:29.340532   14479 main.go:141] libmachine: Parsing certificate...
	I0314 11:21:29.340918   14479 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18384-10823/.minikube/cache/iso/arm64/minikube-v1.32.1-1710348681-18375-arm64.iso...
	I0314 11:21:29.582348   14479 main.go:141] libmachine: Creating SSH key...
	I0314 11:21:29.735307   14479 main.go:141] libmachine: Creating Disk image...
	I0314 11:21:29.735316   14479 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0314 11:21:29.738183   14479 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/old-k8s-version-885000/disk.qcow2.raw /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/old-k8s-version-885000/disk.qcow2
	I0314 11:21:29.750442   14479 main.go:141] libmachine: STDOUT: 
	I0314 11:21:29.750466   14479 main.go:141] libmachine: STDERR: 
	I0314 11:21:29.750536   14479 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/old-k8s-version-885000/disk.qcow2 +20000M
	I0314 11:21:29.762626   14479 main.go:141] libmachine: STDOUT: Image resized.
	
	I0314 11:21:29.762646   14479 main.go:141] libmachine: STDERR: 
	I0314 11:21:29.762665   14479 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/old-k8s-version-885000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/old-k8s-version-885000/disk.qcow2
	I0314 11:21:29.762668   14479 main.go:141] libmachine: Starting QEMU VM...
	I0314 11:21:29.762695   14479 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/old-k8s-version-885000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/old-k8s-version-885000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/old-k8s-version-885000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:32:e6:92:0a:11 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/old-k8s-version-885000/disk.qcow2
	I0314 11:21:29.764591   14479 main.go:141] libmachine: STDOUT: 
	I0314 11:21:29.764607   14479 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0314 11:21:29.764625   14479 client.go:171] duration metric: took 424.271708ms to LocalClient.Create
	I0314 11:21:31.766878   14479 start.go:128] duration metric: took 2.450597625s to createHost
	I0314 11:21:31.766994   14479 start.go:83] releasing machines lock for "old-k8s-version-885000", held for 2.450778416s
	W0314 11:21:31.767040   14479 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0314 11:21:31.785324   14479 out.go:177] * Deleting "old-k8s-version-885000" in qemu2 ...
	W0314 11:21:31.807191   14479 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0314 11:21:31.807216   14479 start.go:728] Will try again in 5 seconds ...
	I0314 11:21:36.807955   14479 start.go:360] acquireMachinesLock for old-k8s-version-885000: {Name:mk7f342f84a1fcf2acaf2ebdc885e8ee2d848fd8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 11:21:36.808372   14479 start.go:364] duration metric: took 311.458µs to acquireMachinesLock for "old-k8s-version-885000"
	I0314 11:21:36.808525   14479 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-885000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-885000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0314 11:21:36.808722   14479 start.go:125] createHost starting for "" (driver="qemu2")
	I0314 11:21:36.816209   14479 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0314 11:21:36.863097   14479 start.go:159] libmachine.API.Create for "old-k8s-version-885000" (driver="qemu2")
	I0314 11:21:36.863150   14479 client.go:168] LocalClient.Create starting
	I0314 11:21:36.863248   14479 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/ca.pem
	I0314 11:21:36.863311   14479 main.go:141] libmachine: Decoding PEM data...
	I0314 11:21:36.863330   14479 main.go:141] libmachine: Parsing certificate...
	I0314 11:21:36.863393   14479 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/cert.pem
	I0314 11:21:36.863434   14479 main.go:141] libmachine: Decoding PEM data...
	I0314 11:21:36.863447   14479 main.go:141] libmachine: Parsing certificate...
	I0314 11:21:36.863952   14479 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18384-10823/.minikube/cache/iso/arm64/minikube-v1.32.1-1710348681-18375-arm64.iso...
	I0314 11:21:37.014883   14479 main.go:141] libmachine: Creating SSH key...
	I0314 11:21:37.271151   14479 main.go:141] libmachine: Creating Disk image...
	I0314 11:21:37.271162   14479 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0314 11:21:37.271365   14479 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/old-k8s-version-885000/disk.qcow2.raw /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/old-k8s-version-885000/disk.qcow2
	I0314 11:21:37.284225   14479 main.go:141] libmachine: STDOUT: 
	I0314 11:21:37.284247   14479 main.go:141] libmachine: STDERR: 
	I0314 11:21:37.284302   14479 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/old-k8s-version-885000/disk.qcow2 +20000M
	I0314 11:21:37.295342   14479 main.go:141] libmachine: STDOUT: Image resized.
	
	I0314 11:21:37.295370   14479 main.go:141] libmachine: STDERR: 
	I0314 11:21:37.295402   14479 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/old-k8s-version-885000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/old-k8s-version-885000/disk.qcow2
	I0314 11:21:37.295407   14479 main.go:141] libmachine: Starting QEMU VM...
	I0314 11:21:37.295444   14479 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/old-k8s-version-885000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/old-k8s-version-885000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/old-k8s-version-885000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:a5:38:e8:49:66 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/old-k8s-version-885000/disk.qcow2
	I0314 11:21:37.297262   14479 main.go:141] libmachine: STDOUT: 
	I0314 11:21:37.297277   14479 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0314 11:21:37.297292   14479 client.go:171] duration metric: took 434.134917ms to LocalClient.Create
	I0314 11:21:39.299479   14479 start.go:128] duration metric: took 2.490700917s to createHost
	I0314 11:21:39.299531   14479 start.go:83] releasing machines lock for "old-k8s-version-885000", held for 2.491126459s
	W0314 11:21:39.299754   14479 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-885000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-885000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0314 11:21:39.311126   14479 out.go:177] 
	W0314 11:21:39.314273   14479 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0314 11:21:39.314335   14479 out.go:239] * 
	* 
	W0314 11:21:39.315889   14479 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0314 11:21:39.326110   14479 out.go:177] 

** /stderr **
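
Everything up to the VM launch completes cleanly in the log above: libmachine converts a raw scratch file to qcow2 and then grows it by 20000 MB before qemu is ever started. As a sketch (with the long /Users/jenkins/... paths shortened), the same two disk-prep steps from the log are:

    qemu-img convert -f raw -O qcow2 disk.qcow2.raw disk.qcow2
    qemu-img resize disk.qcow2 +20000M    # prints "Image resized."

The step that fails is the launch: socket_vmnet_client must first connect to /var/run/socket_vmnet so it can hand the connected fd to qemu-system-aarch64 as fd=3 for the socket netdev; the "Connection refused" in STDERR means that initial connect never succeeds, so the VM is never started.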
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-885000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-885000 -n old-k8s-version-885000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-885000 -n old-k8s-version-885000: exit status 7 (55.470459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-885000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (10.21s)
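
Both creation attempts fail identically, and the cause is host-side: nothing is listening on /var/run/socket_vmnet, so the 5-second retry cannot help. A minimal way to check the daemon on the CI host, assuming socket_vmnet is installed under /opt/socket_vmnet as the SocketVMnetPath/SocketVMnetClientPath settings above indicate:

    # Is the socket present, and is anything accepting connections on it?
    ls -l /var/run/socket_vmnet
    nc -U /var/run/socket_vmnet < /dev/null && echo listening || echo refused

    # If refused, (re)start the daemon; these flags are the upstream
    # lima-vm/socket_vmnet defaults and may need adjusting for this host:
    sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet

Until that probe reports a listener, every qemu2 test that uses the socket_vmnet network will keep exiting with status 80.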

TestStartStop/group/no-preload/serial/FirstStart (11.91s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-861000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-861000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.29.0-rc.2: exit status 80 (11.866705458s)

-- stdout --
	* [no-preload-861000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18384
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-861000" primary control-plane node in "no-preload-861000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-861000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0314 11:21:30.310017   14535 out.go:291] Setting OutFile to fd 1 ...
	I0314 11:21:30.310126   14535 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:21:30.310129   14535 out.go:304] Setting ErrFile to fd 2...
	I0314 11:21:30.310131   14535 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:21:30.310251   14535 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
	I0314 11:21:30.311293   14535 out.go:298] Setting JSON to false
	I0314 11:21:30.326841   14535 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8462,"bootTime":1710432028,"procs":379,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0314 11:21:30.326914   14535 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0314 11:21:30.333750   14535 out.go:177] * [no-preload-861000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0314 11:21:30.341733   14535 out.go:177]   - MINIKUBE_LOCATION=18384
	I0314 11:21:30.345866   14535 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	I0314 11:21:30.341773   14535 notify.go:220] Checking for updates...
	I0314 11:21:30.351751   14535 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0314 11:21:30.354751   14535 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 11:21:30.357832   14535 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	I0314 11:21:30.360772   14535 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 11:21:30.364053   14535 config.go:182] Loaded profile config "multinode-382000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 11:21:30.364130   14535 config.go:182] Loaded profile config "old-k8s-version-885000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0314 11:21:30.364178   14535 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 11:21:30.368758   14535 out.go:177] * Using the qemu2 driver based on user configuration
	I0314 11:21:30.375722   14535 start.go:297] selected driver: qemu2
	I0314 11:21:30.375727   14535 start.go:901] validating driver "qemu2" against <nil>
	I0314 11:21:30.375732   14535 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 11:21:30.377916   14535 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0314 11:21:30.380746   14535 out.go:177] * Automatically selected the socket_vmnet network
	I0314 11:21:30.383828   14535 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 11:21:30.383855   14535 cni.go:84] Creating CNI manager for ""
	I0314 11:21:30.383863   14535 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0314 11:21:30.383867   14535 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0314 11:21:30.383901   14535 start.go:340] cluster config:
	{Name:no-preload-861000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-861000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 11:21:30.388381   14535 iso.go:125] acquiring lock: {Name:mkb282eb7d87bd36e7a71d78648e2dadb7f0a89e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 11:21:30.395773   14535 out.go:177] * Starting "no-preload-861000" primary control-plane node in "no-preload-861000" cluster
	I0314 11:21:30.399788   14535 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0314 11:21:30.399879   14535 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/no-preload-861000/config.json ...
	I0314 11:21:30.399898   14535 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/no-preload-861000/config.json: {Name:mk42be7ff942517ee246c491f49d6ec6b220790a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 11:21:30.400092   14535 cache.go:107] acquiring lock: {Name:mkb89063aca4cf7893fa98179e72545e309731ff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 11:21:30.400094   14535 cache.go:107] acquiring lock: {Name:mkd31df80b1b2c1282e0224438f6049f1125b826 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 11:21:30.400104   14535 cache.go:107] acquiring lock: {Name:mkb792117bfe6f99234ca11d07b2492e47a3686d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 11:21:30.400129   14535 cache.go:107] acquiring lock: {Name:mkf1663e29de85cbaf04fbf85ec2fd9b63d1ebab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 11:21:30.400132   14535 cache.go:107] acquiring lock: {Name:mkdb47b67a73577044733ed6977647d4fce50c6e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 11:21:30.400179   14535 cache.go:107] acquiring lock: {Name:mkbd99275d76629b0e4bad59f8583556c50a36b4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 11:21:30.400191   14535 cache.go:107] acquiring lock: {Name:mke36204c747b403d3cdb9bf592baa257c9d1c9f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 11:21:30.400360   14535 start.go:360] acquireMachinesLock for no-preload-861000: {Name:mk7f342f84a1fcf2acaf2ebdc885e8ee2d848fd8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 11:21:30.400541   14535 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0314 11:21:30.400573   14535 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0314 11:21:30.400586   14535 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0314 11:21:30.400594   14535 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0314 11:21:30.400153   14535 cache.go:107] acquiring lock: {Name:mkb5d8b64feb3785748df6a1b45e61ff7bce7f59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 11:21:30.400621   14535 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0314 11:21:30.400640   14535 cache.go:115] /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0314 11:21:30.400650   14535 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 729.25µs
	I0314 11:21:30.400660   14535 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0314 11:21:30.400664   14535 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0314 11:21:30.400625   14535 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0314 11:21:30.406443   14535 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0314 11:21:30.406485   14535 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0314 11:21:30.406521   14535 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0314 11:21:30.406570   14535 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0314 11:21:30.406663   14535 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0314 11:21:30.406859   14535 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0314 11:21:30.406869   14535 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0314 11:21:31.767175   14535 start.go:364] duration metric: took 1.36678425s to acquireMachinesLock for "no-preload-861000"
	I0314 11:21:31.767334   14535 start.go:93] Provisioning new machine with config: &{Name:no-preload-861000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-861000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0314 11:21:31.767588   14535 start.go:125] createHost starting for "" (driver="qemu2")
	I0314 11:21:31.776368   14535 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0314 11:21:31.822608   14535 start.go:159] libmachine.API.Create for "no-preload-861000" (driver="qemu2")
	I0314 11:21:31.822654   14535 client.go:168] LocalClient.Create starting
	I0314 11:21:31.822781   14535 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/ca.pem
	I0314 11:21:31.822841   14535 main.go:141] libmachine: Decoding PEM data...
	I0314 11:21:31.822863   14535 main.go:141] libmachine: Parsing certificate...
	I0314 11:21:31.822931   14535 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/cert.pem
	I0314 11:21:31.822978   14535 main.go:141] libmachine: Decoding PEM data...
	I0314 11:21:31.822996   14535 main.go:141] libmachine: Parsing certificate...
	I0314 11:21:31.823781   14535 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18384-10823/.minikube/cache/iso/arm64/minikube-v1.32.1-1710348681-18375-arm64.iso...
	I0314 11:21:31.975625   14535 main.go:141] libmachine: Creating SSH key...
	I0314 11:21:32.100527   14535 main.go:141] libmachine: Creating Disk image...
	I0314 11:21:32.100537   14535 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0314 11:21:32.100764   14535 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/no-preload-861000/disk.qcow2.raw /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/no-preload-861000/disk.qcow2
	I0314 11:21:32.113376   14535 main.go:141] libmachine: STDOUT: 
	I0314 11:21:32.113396   14535 main.go:141] libmachine: STDERR: 
	I0314 11:21:32.113446   14535 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/no-preload-861000/disk.qcow2 +20000M
	I0314 11:21:32.124011   14535 main.go:141] libmachine: STDOUT: Image resized.
	
	I0314 11:21:32.124032   14535 main.go:141] libmachine: STDERR: 
	I0314 11:21:32.124049   14535 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/no-preload-861000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/no-preload-861000/disk.qcow2
	I0314 11:21:32.124054   14535 main.go:141] libmachine: Starting QEMU VM...
	I0314 11:21:32.124094   14535 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/no-preload-861000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/no-preload-861000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/no-preload-861000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:68:82:99:d8:64 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/no-preload-861000/disk.qcow2
	I0314 11:21:32.125812   14535 main.go:141] libmachine: STDOUT: 
	I0314 11:21:32.125828   14535 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0314 11:21:32.125849   14535 client.go:171] duration metric: took 303.186333ms to LocalClient.Create
	I0314 11:21:32.373187   14535 cache.go:162] opening:  /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0314 11:21:32.441800   14535 cache.go:162] opening:  /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1
	I0314 11:21:32.479876   14535 cache.go:162] opening:  /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0314 11:21:32.506329   14535 cache.go:162] opening:  /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0314 11:21:32.506387   14535 cache.go:162] opening:  /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9
	I0314 11:21:32.507660   14535 cache.go:162] opening:  /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0314 11:21:32.528136   14535 cache.go:162] opening:  /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.10-0
	I0314 11:21:32.628131   14535 cache.go:157] /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0314 11:21:32.628157   14535 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 2.228017583s
	I0314 11:21:32.628172   14535 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0314 11:21:34.126110   14535 start.go:128] duration metric: took 2.358464292s to createHost
	I0314 11:21:34.126189   14535 start.go:83] releasing machines lock for "no-preload-861000", held for 2.358920625s
	W0314 11:21:34.126270   14535 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0314 11:21:34.134346   14535 out.go:177] * Deleting "no-preload-861000" in qemu2 ...
	W0314 11:21:34.160480   14535 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0314 11:21:34.160515   14535 start.go:728] Will try again in 5 seconds ...
	I0314 11:21:35.432313   14535 cache.go:157] /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 exists
	I0314 11:21:35.432404   14535 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2" took 5.032378417s
	I0314 11:21:35.432437   14535 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 succeeded
	I0314 11:21:35.499028   14535 cache.go:157] /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0314 11:21:35.499073   14535 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 5.099042333s
	I0314 11:21:35.499122   14535 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0314 11:21:36.120415   14535 cache.go:157] /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 exists
	I0314 11:21:36.120471   14535 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2" took 5.720528584s
	I0314 11:21:36.120494   14535 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 succeeded
	I0314 11:21:36.130531   14535 cache.go:157] /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 exists
	I0314 11:21:36.130566   14535 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2" took 5.730614458s
	I0314 11:21:36.130585   14535 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 succeeded
	I0314 11:21:37.147703   14535 cache.go:157] /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 exists
	I0314 11:21:37.147717   14535 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2" took 6.747687333s
	I0314 11:21:37.147725   14535 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 succeeded
	I0314 11:21:39.160899   14535 start.go:360] acquireMachinesLock for no-preload-861000: {Name:mk7f342f84a1fcf2acaf2ebdc885e8ee2d848fd8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 11:21:39.299616   14535 start.go:364] duration metric: took 138.622917ms to acquireMachinesLock for "no-preload-861000"
	I0314 11:21:39.299736   14535 start.go:93] Provisioning new machine with config: &{Name:no-preload-861000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-861000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0314 11:21:39.299892   14535 start.go:125] createHost starting for "" (driver="qemu2")
	I0314 11:21:39.308066   14535 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0314 11:21:39.354880   14535 start.go:159] libmachine.API.Create for "no-preload-861000" (driver="qemu2")
	I0314 11:21:39.354937   14535 client.go:168] LocalClient.Create starting
	I0314 11:21:39.355076   14535 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/ca.pem
	I0314 11:21:39.355123   14535 main.go:141] libmachine: Decoding PEM data...
	I0314 11:21:39.355143   14535 main.go:141] libmachine: Parsing certificate...
	I0314 11:21:39.355227   14535 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/cert.pem
	I0314 11:21:39.355261   14535 main.go:141] libmachine: Decoding PEM data...
	I0314 11:21:39.355274   14535 main.go:141] libmachine: Parsing certificate...
	I0314 11:21:39.355834   14535 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18384-10823/.minikube/cache/iso/arm64/minikube-v1.32.1-1710348681-18375-arm64.iso...
	I0314 11:21:39.857458   14535 main.go:141] libmachine: Creating SSH key...
	I0314 11:21:40.077474   14535 main.go:141] libmachine: Creating Disk image...
	I0314 11:21:40.077482   14535 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0314 11:21:40.077666   14535 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/no-preload-861000/disk.qcow2.raw /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/no-preload-861000/disk.qcow2
	I0314 11:21:40.090174   14535 main.go:141] libmachine: STDOUT: 
	I0314 11:21:40.090208   14535 main.go:141] libmachine: STDERR: 
	I0314 11:21:40.090262   14535 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/no-preload-861000/disk.qcow2 +20000M
	I0314 11:21:40.101039   14535 main.go:141] libmachine: STDOUT: Image resized.
	
	I0314 11:21:40.101057   14535 main.go:141] libmachine: STDERR: 
	I0314 11:21:40.101072   14535 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/no-preload-861000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/no-preload-861000/disk.qcow2
	I0314 11:21:40.101077   14535 main.go:141] libmachine: Starting QEMU VM...
	I0314 11:21:40.101108   14535 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/no-preload-861000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/no-preload-861000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/no-preload-861000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:43:69:bc:5b:57 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/no-preload-861000/disk.qcow2
	I0314 11:21:40.102941   14535 main.go:141] libmachine: STDOUT: 
	I0314 11:21:40.102957   14535 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0314 11:21:40.102969   14535 client.go:171] duration metric: took 748.023833ms to LocalClient.Create
	I0314 11:21:41.995923   14535 cache.go:157] /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.10-0 exists
	I0314 11:21:41.995941   14535 cache.go:96] cache image "registry.k8s.io/etcd:3.5.10-0" -> "/Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.10-0" took 11.595877917s
	I0314 11:21:41.995949   14535 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.10-0 -> /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.10-0 succeeded
	I0314 11:21:41.995973   14535 cache.go:87] Successfully saved all images to host disk.
	I0314 11:21:42.103522   14535 start.go:128] duration metric: took 2.803607958s to createHost
	I0314 11:21:42.103534   14535 start.go:83] releasing machines lock for "no-preload-861000", held for 2.803887167s
	W0314 11:21:42.103587   14535 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-861000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-861000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0314 11:21:42.111433   14535 out.go:177] 
	W0314 11:21:42.118567   14535 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0314 11:21:42.118575   14535 out.go:239] * 
	* 
	W0314 11:21:42.119108   14535 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0314 11:21:42.133519   14535 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-861000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.29.0-rc.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-861000 -n no-preload-861000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-861000 -n no-preload-861000: exit status 7 (39.039416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-861000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (11.91s)
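
One difference from the old-k8s-version run shows up here: with --preload=false, minikube caches each component image individually (the cache.go lines above), and that path is independent of the VM, so every image is saved to host disk even though both VM creation attempts fail. The resulting tar files can be inspected under the cache directory named in the log, for example:

    ls /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/
    # kube-apiserver_v1.29.0-rc.2, kube-proxy_v1.29.0-rc.2, etcd_3.5.10-0, pause_3.9, ...

"Successfully saved all images to host disk" is therefore expected output; the only failure is still the socket_vmnet connection.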

TestStartStop/group/old-k8s-version/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-885000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-885000 create -f testdata/busybox.yaml: exit status 1 (30.369167ms)

** stderr ** 
	error: context "old-k8s-version-885000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-885000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-885000 -n old-k8s-version-885000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-885000 -n old-k8s-version-885000: exit status 7 (35.614459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-885000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-885000 -n old-k8s-version-885000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-885000 -n old-k8s-version-885000: exit status 7 (33.51875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-885000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.10s)
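
This failure is purely downstream of FirstStart: because the VM was never created, minikube never wrote an old-k8s-version-885000 entry into the kubeconfig, so kubectl fails at context resolution before any API call is attempted. That can be confirmed against the same kubeconfig the tests use (path taken from the log above):

    KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig kubectl config get-contexts
    # no old-k8s-version-885000 context is expected in the output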

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.4s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-885000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-885000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-885000 describe deploy/metrics-server -n kube-system: exit status 1 (31.42275ms)

** stderr ** 
	error: context "old-k8s-version-885000" does not exist

                                                
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-885000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-885000 -n old-k8s-version-885000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-885000 -n old-k8s-version-885000: exit status 7 (32.402667ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-885000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.40s)
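
Note that the addons enable step itself exits successfully even with no cluster, which suggests it only records the addon, together with the --images and --registries overrides, in the profile's config; it is the follow-up kubectl describe that needs a live apiserver. The recorded state can be read back without a running VM; a sketch, assuming the same binary and profile:

    out/minikube-darwin-arm64 -p old-k8s-version-885000 addons list
    # metrics-server should show as enabled even though the host is Stopped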

TestStartStop/group/old-k8s-version/serial/SecondStart (5.3s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-885000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-885000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.2280665s)

-- stdout --
	* [old-k8s-version-885000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18384
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-885000" primary control-plane node in "old-k8s-version-885000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-885000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-885000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0314 11:21:42.066437   14609 out.go:291] Setting OutFile to fd 1 ...
	I0314 11:21:42.066566   14609 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:21:42.066569   14609 out.go:304] Setting ErrFile to fd 2...
	I0314 11:21:42.066571   14609 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:21:42.066684   14609 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
	I0314 11:21:42.067651   14609 out.go:298] Setting JSON to false
	I0314 11:21:42.083355   14609 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8474,"bootTime":1710432028,"procs":379,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0314 11:21:42.083412   14609 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0314 11:21:42.088499   14609 out.go:177] * [old-k8s-version-885000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0314 11:21:42.096499   14609 out.go:177]   - MINIKUBE_LOCATION=18384
	I0314 11:21:42.100419   14609 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	I0314 11:21:42.096590   14609 notify.go:220] Checking for updates...
	I0314 11:21:42.107496   14609 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0314 11:21:42.118560   14609 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 11:21:42.133521   14609 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	I0314 11:21:42.141466   14609 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 11:21:42.147864   14609 config.go:182] Loaded profile config "old-k8s-version-885000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0314 11:21:42.152472   14609 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0314 11:21:42.155607   14609 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 11:21:42.158400   14609 out.go:177] * Using the qemu2 driver based on existing profile
	I0314 11:21:42.166510   14609 start.go:297] selected driver: qemu2
	I0314 11:21:42.166521   14609 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-885000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-885000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:
0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 11:21:42.166601   14609 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 11:21:42.169206   14609 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 11:21:42.169235   14609 cni.go:84] Creating CNI manager for ""
	I0314 11:21:42.169243   14609 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0314 11:21:42.169272   14609 start.go:340] cluster config:
	{Name:old-k8s-version-885000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-885000 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount
9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 11:21:42.174559   14609 iso.go:125] acquiring lock: {Name:mkb282eb7d87bd36e7a71d78648e2dadb7f0a89e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 11:21:42.181473   14609 out.go:177] * Starting "old-k8s-version-885000" primary control-plane node in "old-k8s-version-885000" cluster
	I0314 11:21:42.187465   14609 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0314 11:21:42.187489   14609 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0314 11:21:42.187507   14609 cache.go:56] Caching tarball of preloaded images
	I0314 11:21:42.187609   14609 preload.go:173] Found /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0314 11:21:42.187616   14609 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0314 11:21:42.187685   14609 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/old-k8s-version-885000/config.json ...
	I0314 11:21:42.187997   14609 start.go:360] acquireMachinesLock for old-k8s-version-885000: {Name:mk7f342f84a1fcf2acaf2ebdc885e8ee2d848fd8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 11:21:42.188034   14609 start.go:364] duration metric: took 28.583µs to acquireMachinesLock for "old-k8s-version-885000"
	I0314 11:21:42.188043   14609 start.go:96] Skipping create...Using existing machine configuration
	I0314 11:21:42.188048   14609 fix.go:54] fixHost starting: 
	I0314 11:21:42.188162   14609 fix.go:112] recreateIfNeeded on old-k8s-version-885000: state=Stopped err=<nil>
	W0314 11:21:42.188171   14609 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 11:21:42.192532   14609 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-885000" ...
	I0314 11:21:42.200484   14609 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/old-k8s-version-885000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/old-k8s-version-885000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/old-k8s-version-885000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:a5:38:e8:49:66 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/old-k8s-version-885000/disk.qcow2
	I0314 11:21:42.202531   14609 main.go:141] libmachine: STDOUT: 
	I0314 11:21:42.202549   14609 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0314 11:21:42.202588   14609 fix.go:56] duration metric: took 14.539166ms for fixHost
	I0314 11:21:42.202594   14609 start.go:83] releasing machines lock for "old-k8s-version-885000", held for 14.55625ms
	W0314 11:21:42.202600   14609 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0314 11:21:42.202638   14609 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0314 11:21:42.202643   14609 start.go:728] Will try again in 5 seconds ...
	I0314 11:21:47.204823   14609 start.go:360] acquireMachinesLock for old-k8s-version-885000: {Name:mk7f342f84a1fcf2acaf2ebdc885e8ee2d848fd8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 11:21:47.205144   14609 start.go:364] duration metric: took 238.166µs to acquireMachinesLock for "old-k8s-version-885000"
	I0314 11:21:47.205245   14609 start.go:96] Skipping create...Using existing machine configuration
	I0314 11:21:47.205267   14609 fix.go:54] fixHost starting: 
	I0314 11:21:47.205976   14609 fix.go:112] recreateIfNeeded on old-k8s-version-885000: state=Stopped err=<nil>
	W0314 11:21:47.206002   14609 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 11:21:47.211533   14609 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-885000" ...
	I0314 11:21:47.218652   14609 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/old-k8s-version-885000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/old-k8s-version-885000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/old-k8s-version-885000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:a5:38:e8:49:66 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/old-k8s-version-885000/disk.qcow2
	I0314 11:21:47.228203   14609 main.go:141] libmachine: STDOUT: 
	I0314 11:21:47.228298   14609 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0314 11:21:47.228388   14609 fix.go:56] duration metric: took 23.120417ms for fixHost
	I0314 11:21:47.228418   14609 start.go:83] releasing machines lock for "old-k8s-version-885000", held for 23.253458ms
	W0314 11:21:47.228647   14609 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-885000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-885000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0314 11:21:47.236488   14609 out.go:177] 
	W0314 11:21:47.240479   14609 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0314 11:21:47.240525   14609 out.go:239] * 
	* 
	W0314 11:21:47.243246   14609 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0314 11:21:47.250431   14609 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-885000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-885000 -n old-k8s-version-885000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-885000 -n old-k8s-version-885000: exit status 7 (70.88025ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-885000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.30s)
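The SecondStart failures in this run share one root cause: the qemu2 driver cannot reach the socket_vmnet daemon at /var/run/socket_vmnet (the SocketVMnetPath recorded in the cluster config above). A minimal probe, a sketch assuming that default socket path and not part of the test harness, can confirm whether the daemon is listening before re-running the suite:

// socketprobe.go - illustrative sketch, not part of the test harness:
// check whether socket_vmnet is accepting connections at the path the
// qemu2 driver uses.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// SocketVMnetPath from the cluster config in the logs above.
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet unreachable at %s: %v\n", sock, err)
		os.Exit(1)
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is listening at", sock)
}

A dial failure with "connection refused" here indicates the daemon is down or the socket path is stale, matching the driver's repeated `Failed to connect to "/var/run/socket_vmnet": Connection refused` output above.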

TestStartStop/group/no-preload/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-861000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-861000 create -f testdata/busybox.yaml: exit status 1 (27.295208ms)

** stderr ** 
	error: context "no-preload-861000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-861000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-861000 -n no-preload-861000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-861000 -n no-preload-861000: exit status 7 (31.487084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-861000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-861000 -n no-preload-861000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-861000 -n no-preload-861000: exit status 7 (31.776917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-861000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-861000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-861000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-861000 describe deploy/metrics-server -n kube-system: exit status 1 (27.004917ms)

** stderr ** 
	error: context "no-preload-861000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-861000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-861000 -n no-preload-861000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-861000 -n no-preload-861000: exit status 7 (31.841542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-861000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/no-preload/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-861000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-861000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.29.0-rc.2: exit status 80 (5.188590958s)

-- stdout --
	* [no-preload-861000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18384
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-861000" primary control-plane node in "no-preload-861000" cluster
	* Restarting existing qemu2 VM for "no-preload-861000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-861000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0314 11:21:46.245379   14652 out.go:291] Setting OutFile to fd 1 ...
	I0314 11:21:46.245491   14652 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:21:46.245494   14652 out.go:304] Setting ErrFile to fd 2...
	I0314 11:21:46.245496   14652 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:21:46.245638   14652 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
	I0314 11:21:46.246624   14652 out.go:298] Setting JSON to false
	I0314 11:21:46.262419   14652 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8478,"bootTime":1710432028,"procs":379,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0314 11:21:46.262482   14652 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0314 11:21:46.266955   14652 out.go:177] * [no-preload-861000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0314 11:21:46.273838   14652 out.go:177]   - MINIKUBE_LOCATION=18384
	I0314 11:21:46.273910   14652 notify.go:220] Checking for updates...
	I0314 11:21:46.277891   14652 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	I0314 11:21:46.280869   14652 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0314 11:21:46.283817   14652 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 11:21:46.286816   14652 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	I0314 11:21:46.288187   14652 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 11:21:46.291124   14652 config.go:182] Loaded profile config "no-preload-861000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0314 11:21:46.291405   14652 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 11:21:46.295801   14652 out.go:177] * Using the qemu2 driver based on existing profile
	I0314 11:21:46.300789   14652 start.go:297] selected driver: qemu2
	I0314 11:21:46.300794   14652 start.go:901] validating driver "qemu2" against &{Name:no-preload-861000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-861000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDi
sks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 11:21:46.300849   14652 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 11:21:46.303009   14652 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 11:21:46.303047   14652 cni.go:84] Creating CNI manager for ""
	I0314 11:21:46.303054   14652 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0314 11:21:46.303076   14652 start.go:340] cluster config:
	{Name:no-preload-861000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-861000 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host M
ount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 11:21:46.307141   14652 iso.go:125] acquiring lock: {Name:mkb282eb7d87bd36e7a71d78648e2dadb7f0a89e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 11:21:46.313806   14652 out.go:177] * Starting "no-preload-861000" primary control-plane node in "no-preload-861000" cluster
	I0314 11:21:46.317821   14652 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0314 11:21:46.317917   14652 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/no-preload-861000/config.json ...
	I0314 11:21:46.317941   14652 cache.go:107] acquiring lock: {Name:mkb5d8b64feb3785748df6a1b45e61ff7bce7f59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 11:21:46.317952   14652 cache.go:107] acquiring lock: {Name:mkdb47b67a73577044733ed6977647d4fce50c6e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 11:21:46.318012   14652 cache.go:115] /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0314 11:21:46.318020   14652 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 90.667µs
	I0314 11:21:46.318027   14652 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0314 11:21:46.318040   14652 cache.go:107] acquiring lock: {Name:mkb89063aca4cf7893fa98179e72545e309731ff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 11:21:46.318052   14652 cache.go:115] /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 exists
	I0314 11:21:46.318057   14652 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2" took 124.791µs
	I0314 11:21:46.318066   14652 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 succeeded
	I0314 11:21:46.318079   14652 cache.go:115] /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.10-0 exists
	I0314 11:21:46.318083   14652 cache.go:96] cache image "registry.k8s.io/etcd:3.5.10-0" -> "/Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.10-0" took 43.334µs
	I0314 11:21:46.318086   14652 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.10-0 -> /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.10-0 succeeded
	I0314 11:21:46.318080   14652 cache.go:107] acquiring lock: {Name:mkbd99275d76629b0e4bad59f8583556c50a36b4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 11:21:46.318092   14652 cache.go:107] acquiring lock: {Name:mkb792117bfe6f99234ca11d07b2492e47a3686d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 11:21:46.318076   14652 cache.go:107] acquiring lock: {Name:mke36204c747b403d3cdb9bf592baa257c9d1c9f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 11:21:46.318119   14652 cache.go:115] /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0314 11:21:46.318127   14652 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 48.208µs
	I0314 11:21:46.318131   14652 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0314 11:21:46.318131   14652 cache.go:115] /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 exists
	I0314 11:21:46.318136   14652 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2" took 44.125µs
	I0314 11:21:46.318139   14652 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 succeeded
	I0314 11:21:46.318177   14652 cache.go:107] acquiring lock: {Name:mkf1663e29de85cbaf04fbf85ec2fd9b63d1ebab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 11:21:46.318177   14652 cache.go:107] acquiring lock: {Name:mkd31df80b1b2c1282e0224438f6049f1125b826 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 11:21:46.318248   14652 cache.go:115] /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 exists
	I0314 11:21:46.318258   14652 cache.go:115] /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0314 11:21:46.318270   14652 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 170.833µs
	I0314 11:21:46.318278   14652 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0314 11:21:46.318274   14652 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2" took 225.25µs
	I0314 11:21:46.318283   14652 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 succeeded
	I0314 11:21:46.318249   14652 cache.go:115] /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 exists
	I0314 11:21:46.318288   14652 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2" took 151.25µs
	I0314 11:21:46.318291   14652 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 succeeded
	I0314 11:21:46.318293   14652 cache.go:87] Successfully saved all images to host disk.
	I0314 11:21:46.318344   14652 start.go:360] acquireMachinesLock for no-preload-861000: {Name:mk7f342f84a1fcf2acaf2ebdc885e8ee2d848fd8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 11:21:46.318370   14652 start.go:364] duration metric: took 20.375µs to acquireMachinesLock for "no-preload-861000"
	I0314 11:21:46.318381   14652 start.go:96] Skipping create...Using existing machine configuration
	I0314 11:21:46.318386   14652 fix.go:54] fixHost starting: 
	I0314 11:21:46.318502   14652 fix.go:112] recreateIfNeeded on no-preload-861000: state=Stopped err=<nil>
	W0314 11:21:46.318513   14652 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 11:21:46.326825   14652 out.go:177] * Restarting existing qemu2 VM for "no-preload-861000" ...
	I0314 11:21:46.330850   14652 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/no-preload-861000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/no-preload-861000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/no-preload-861000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:43:69:bc:5b:57 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/no-preload-861000/disk.qcow2
	I0314 11:21:46.332973   14652 main.go:141] libmachine: STDOUT: 
	I0314 11:21:46.332993   14652 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0314 11:21:46.333021   14652 fix.go:56] duration metric: took 14.635041ms for fixHost
	I0314 11:21:46.333025   14652 start.go:83] releasing machines lock for "no-preload-861000", held for 14.650959ms
	W0314 11:21:46.333034   14652 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0314 11:21:46.333065   14652 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0314 11:21:46.333070   14652 start.go:728] Will try again in 5 seconds ...
	I0314 11:21:51.334697   14652 start.go:360] acquireMachinesLock for no-preload-861000: {Name:mk7f342f84a1fcf2acaf2ebdc885e8ee2d848fd8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 11:21:51.335158   14652 start.go:364] duration metric: took 357.916µs to acquireMachinesLock for "no-preload-861000"
	I0314 11:21:51.335285   14652 start.go:96] Skipping create...Using existing machine configuration
	I0314 11:21:51.335309   14652 fix.go:54] fixHost starting: 
	I0314 11:21:51.336038   14652 fix.go:112] recreateIfNeeded on no-preload-861000: state=Stopped err=<nil>
	W0314 11:21:51.336065   14652 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 11:21:51.354515   14652 out.go:177] * Restarting existing qemu2 VM for "no-preload-861000" ...
	I0314 11:21:51.358591   14652 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/no-preload-861000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/no-preload-861000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/no-preload-861000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:43:69:bc:5b:57 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/no-preload-861000/disk.qcow2
	I0314 11:21:51.369070   14652 main.go:141] libmachine: STDOUT: 
	I0314 11:21:51.369143   14652 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0314 11:21:51.369243   14652 fix.go:56] duration metric: took 33.937458ms for fixHost
	I0314 11:21:51.369261   14652 start.go:83] releasing machines lock for "no-preload-861000", held for 34.079125ms
	W0314 11:21:51.369453   14652 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-861000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-861000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0314 11:21:51.376025   14652 out.go:177] 
	W0314 11:21:51.379569   14652 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0314 11:21:51.379593   14652 out.go:239] * 
	* 
	W0314 11:21:51.382498   14652 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0314 11:21:51.389451   14652 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-861000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.29.0-rc.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-861000 -n no-preload-861000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-861000 -n no-preload-861000: exit status 7 (68.710584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-861000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.26s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-885000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-885000 -n old-k8s-version-885000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-885000 -n old-k8s-version-885000: exit status 7 (33.471541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-885000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-885000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-885000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-885000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.401792ms)

** stderr ** 
	error: context "old-k8s-version-885000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-885000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-885000 -n old-k8s-version-885000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-885000 -n old-k8s-version-885000: exit status 7 (31.586708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-885000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-885000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-885000 -n old-k8s-version-885000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-885000 -n old-k8s-version-885000: exit status 7 (31.727334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-885000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.08s)

TestStartStop/group/old-k8s-version/serial/Pause (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-885000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-885000 --alsologtostderr -v=1: exit status 83 (43.504459ms)

-- stdout --
	* The control-plane node old-k8s-version-885000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-885000"

-- /stdout --
** stderr ** 
	I0314 11:21:47.535905   14671 out.go:291] Setting OutFile to fd 1 ...
	I0314 11:21:47.536358   14671 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:21:47.536361   14671 out.go:304] Setting ErrFile to fd 2...
	I0314 11:21:47.536364   14671 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:21:47.536478   14671 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
	I0314 11:21:47.536665   14671 out.go:298] Setting JSON to false
	I0314 11:21:47.536671   14671 mustload.go:65] Loading cluster: old-k8s-version-885000
	I0314 11:21:47.536856   14671 config.go:182] Loaded profile config "old-k8s-version-885000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0314 11:21:47.541177   14671 out.go:177] * The control-plane node old-k8s-version-885000 host is not running: state=Stopped
	I0314 11:21:47.545002   14671 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-885000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-885000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-885000 -n old-k8s-version-885000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-885000 -n old-k8s-version-885000: exit status 7 (31.699417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-885000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-885000 -n old-k8s-version-885000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-885000 -n old-k8s-version-885000: exit status 7 (30.738ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-885000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.11s)
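
The pause is rejected up front: as the stdout above notes, the profile's host is Stopped because its earlier FirstStart never produced a VM, so there is nothing to pause. A minimal pre-flight sketch (hypothetical, not part of the suite; the binary and profile names are taken from the failure above) that gates a pause on the reported host state:

    # Sketch: only attempt "pause" when the host reports Running; otherwise
    # follow the hint printed in stdout and start the profile first.
    MK=out/minikube-darwin-arm64
    PROFILE=old-k8s-version-885000
    if [ "$("$MK" status --format='{{.Host}}' -p "$PROFILE" -n "$PROFILE")" = "Running" ]; then
      "$MK" pause -p "$PROFILE"
    else
      "$MK" start -p "$PROFILE"
    fi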

TestStartStop/group/embed-certs/serial/FirstStart (9.85s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-178000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-178000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.4: exit status 80 (9.783599333s)

-- stdout --
	* [embed-certs-178000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18384
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-178000" primary control-plane node in "embed-certs-178000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-178000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0314 11:21:48.012113   14694 out.go:291] Setting OutFile to fd 1 ...
	I0314 11:21:48.012256   14694 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:21:48.012259   14694 out.go:304] Setting ErrFile to fd 2...
	I0314 11:21:48.012262   14694 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:21:48.012390   14694 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
	I0314 11:21:48.013471   14694 out.go:298] Setting JSON to false
	I0314 11:21:48.029400   14694 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8480,"bootTime":1710432028,"procs":379,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0314 11:21:48.029464   14694 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0314 11:21:48.034135   14694 out.go:177] * [embed-certs-178000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0314 11:21:48.040064   14694 out.go:177]   - MINIKUBE_LOCATION=18384
	I0314 11:21:48.044034   14694 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	I0314 11:21:48.040099   14694 notify.go:220] Checking for updates...
	I0314 11:21:48.050017   14694 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0314 11:21:48.053082   14694 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 11:21:48.056017   14694 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	I0314 11:21:48.059014   14694 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 11:21:48.062409   14694 config.go:182] Loaded profile config "multinode-382000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 11:21:48.062469   14694 config.go:182] Loaded profile config "no-preload-861000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0314 11:21:48.062515   14694 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 11:21:48.065914   14694 out.go:177] * Using the qemu2 driver based on user configuration
	I0314 11:21:48.073001   14694 start.go:297] selected driver: qemu2
	I0314 11:21:48.073007   14694 start.go:901] validating driver "qemu2" against <nil>
	I0314 11:21:48.073014   14694 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 11:21:48.075333   14694 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0314 11:21:48.076824   14694 out.go:177] * Automatically selected the socket_vmnet network
	I0314 11:21:48.080122   14694 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 11:21:48.080153   14694 cni.go:84] Creating CNI manager for ""
	I0314 11:21:48.080160   14694 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0314 11:21:48.080168   14694 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0314 11:21:48.080196   14694 start.go:340] cluster config:
	{Name:embed-certs-178000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-178000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 11:21:48.084701   14694 iso.go:125] acquiring lock: {Name:mkb282eb7d87bd36e7a71d78648e2dadb7f0a89e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 11:21:48.091973   14694 out.go:177] * Starting "embed-certs-178000" primary control-plane node in "embed-certs-178000" cluster
	I0314 11:21:48.096016   14694 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0314 11:21:48.096032   14694 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0314 11:21:48.096047   14694 cache.go:56] Caching tarball of preloaded images
	I0314 11:21:48.096105   14694 preload.go:173] Found /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0314 11:21:48.096111   14694 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0314 11:21:48.096182   14694 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/embed-certs-178000/config.json ...
	I0314 11:21:48.096193   14694 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/embed-certs-178000/config.json: {Name:mk1982780a9f19efabf27e7a17672bc3837a6b1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 11:21:48.096431   14694 start.go:360] acquireMachinesLock for embed-certs-178000: {Name:mk7f342f84a1fcf2acaf2ebdc885e8ee2d848fd8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 11:21:48.096468   14694 start.go:364] duration metric: took 29.625µs to acquireMachinesLock for "embed-certs-178000"
	I0314 11:21:48.096482   14694 start.go:93] Provisioning new machine with config: &{Name:embed-certs-178000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-178000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0314 11:21:48.096524   14694 start.go:125] createHost starting for "" (driver="qemu2")
	I0314 11:21:48.100952   14694 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0314 11:21:48.118266   14694 start.go:159] libmachine.API.Create for "embed-certs-178000" (driver="qemu2")
	I0314 11:21:48.118292   14694 client.go:168] LocalClient.Create starting
	I0314 11:21:48.118346   14694 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/ca.pem
	I0314 11:21:48.118375   14694 main.go:141] libmachine: Decoding PEM data...
	I0314 11:21:48.118384   14694 main.go:141] libmachine: Parsing certificate...
	I0314 11:21:48.118432   14694 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/cert.pem
	I0314 11:21:48.118454   14694 main.go:141] libmachine: Decoding PEM data...
	I0314 11:21:48.118461   14694 main.go:141] libmachine: Parsing certificate...
	I0314 11:21:48.118939   14694 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18384-10823/.minikube/cache/iso/arm64/minikube-v1.32.1-1710348681-18375-arm64.iso...
	I0314 11:21:48.259799   14694 main.go:141] libmachine: Creating SSH key...
	I0314 11:21:48.322378   14694 main.go:141] libmachine: Creating Disk image...
	I0314 11:21:48.322384   14694 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0314 11:21:48.322570   14694 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/embed-certs-178000/disk.qcow2.raw /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/embed-certs-178000/disk.qcow2
	I0314 11:21:48.334358   14694 main.go:141] libmachine: STDOUT: 
	I0314 11:21:48.334379   14694 main.go:141] libmachine: STDERR: 
	I0314 11:21:48.334432   14694 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/embed-certs-178000/disk.qcow2 +20000M
	I0314 11:21:48.344843   14694 main.go:141] libmachine: STDOUT: Image resized.
	
	I0314 11:21:48.344861   14694 main.go:141] libmachine: STDERR: 
	I0314 11:21:48.344876   14694 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/embed-certs-178000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/embed-certs-178000/disk.qcow2
	I0314 11:21:48.344883   14694 main.go:141] libmachine: Starting QEMU VM...
	I0314 11:21:48.344921   14694 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/embed-certs-178000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/embed-certs-178000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/embed-certs-178000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:64:25:6e:ff:7a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/embed-certs-178000/disk.qcow2
	I0314 11:21:48.346685   14694 main.go:141] libmachine: STDOUT: 
	I0314 11:21:48.346703   14694 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0314 11:21:48.346721   14694 client.go:171] duration metric: took 228.423292ms to LocalClient.Create
	I0314 11:21:50.348914   14694 start.go:128] duration metric: took 2.252358958s to createHost
	I0314 11:21:50.348972   14694 start.go:83] releasing machines lock for "embed-certs-178000", held for 2.252486375s
	W0314 11:21:50.349042   14694 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0314 11:21:50.360221   14694 out.go:177] * Deleting "embed-certs-178000" in qemu2 ...
	W0314 11:21:50.393774   14694 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0314 11:21:50.393802   14694 start.go:728] Will try again in 5 seconds ...
	I0314 11:21:55.396062   14694 start.go:360] acquireMachinesLock for embed-certs-178000: {Name:mk7f342f84a1fcf2acaf2ebdc885e8ee2d848fd8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 11:21:55.396496   14694 start.go:364] duration metric: took 321.75µs to acquireMachinesLock for "embed-certs-178000"
	I0314 11:21:55.396629   14694 start.go:93] Provisioning new machine with config: &{Name:embed-certs-178000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-178000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0314 11:21:55.396915   14694 start.go:125] createHost starting for "" (driver="qemu2")
	I0314 11:21:55.406553   14694 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0314 11:21:55.456737   14694 start.go:159] libmachine.API.Create for "embed-certs-178000" (driver="qemu2")
	I0314 11:21:55.456783   14694 client.go:168] LocalClient.Create starting
	I0314 11:21:55.456884   14694 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/ca.pem
	I0314 11:21:55.456959   14694 main.go:141] libmachine: Decoding PEM data...
	I0314 11:21:55.456978   14694 main.go:141] libmachine: Parsing certificate...
	I0314 11:21:55.457057   14694 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/cert.pem
	I0314 11:21:55.457099   14694 main.go:141] libmachine: Decoding PEM data...
	I0314 11:21:55.457109   14694 main.go:141] libmachine: Parsing certificate...
	I0314 11:21:55.457656   14694 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18384-10823/.minikube/cache/iso/arm64/minikube-v1.32.1-1710348681-18375-arm64.iso...
	I0314 11:21:55.608521   14694 main.go:141] libmachine: Creating SSH key...
	I0314 11:21:55.695842   14694 main.go:141] libmachine: Creating Disk image...
	I0314 11:21:55.695847   14694 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0314 11:21:55.696026   14694 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/embed-certs-178000/disk.qcow2.raw /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/embed-certs-178000/disk.qcow2
	I0314 11:21:55.708294   14694 main.go:141] libmachine: STDOUT: 
	I0314 11:21:55.708313   14694 main.go:141] libmachine: STDERR: 
	I0314 11:21:55.708360   14694 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/embed-certs-178000/disk.qcow2 +20000M
	I0314 11:21:55.718712   14694 main.go:141] libmachine: STDOUT: Image resized.
	
	I0314 11:21:55.718729   14694 main.go:141] libmachine: STDERR: 
	I0314 11:21:55.718747   14694 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/embed-certs-178000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/embed-certs-178000/disk.qcow2
	I0314 11:21:55.718752   14694 main.go:141] libmachine: Starting QEMU VM...
	I0314 11:21:55.718782   14694 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/embed-certs-178000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/embed-certs-178000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/embed-certs-178000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:0c:e5:2f:e4:1f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/embed-certs-178000/disk.qcow2
	I0314 11:21:55.720390   14694 main.go:141] libmachine: STDOUT: 
	I0314 11:21:55.720403   14694 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0314 11:21:55.720414   14694 client.go:171] duration metric: took 263.625667ms to LocalClient.Create
	I0314 11:21:57.722565   14694 start.go:128] duration metric: took 2.325616958s to createHost
	I0314 11:21:57.722607   14694 start.go:83] releasing machines lock for "embed-certs-178000", held for 2.326071375s
	W0314 11:21:57.722941   14694 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-178000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-178000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0314 11:21:57.732362   14694 out.go:177] 
	W0314 11:21:57.737554   14694 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0314 11:21:57.737580   14694 out.go:239] * 
	* 
	W0314 11:21:57.740066   14694 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0314 11:21:57.750438   14694 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-178000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.4": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-178000 -n embed-certs-178000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-178000 -n embed-certs-178000: exit status 7 (66.492ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-178000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (9.85s)
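
Every qemu2 start in this group dies the same way: socket_vmnet_client cannot reach /var/run/socket_vmnet ("Connection refused"), so qemu-system-aarch64 is never launched, and the retry five seconds later hits the identical wall. A host-side sketch for confirming the daemon is up before re-running (the Homebrew service name is an assumption based on how socket_vmnet is commonly installed, not something taken from this log):

    # Sketch: verify the socket_vmnet daemon and its unix socket exist.
    [ -S /var/run/socket_vmnet ] && echo "socket present" || echo "socket missing"
    pgrep -fl socket_vmnet || echo "socket_vmnet daemon not running"
    # If installed via Homebrew, restarting the service may clear the refusal:
    #   sudo brew services restart socket_vmnet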

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-861000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-861000 -n no-preload-861000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-861000 -n no-preload-861000: exit status 7 (34.128792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-861000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)
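
This failure and the two that follow are downstream of the earlier no-preload FirstStart failure: the cluster was never created, so no "no-preload-861000" context exists in the kubeconfig for the test's client to load. The precondition is easy to confirm by hand (a sketch; the context name is copied from the failure message):

    # Sketch: check whether the kubeconfig contains the expected context.
    kubectl config get-contexts -o name | grep -x no-preload-861000 \
      || echo 'context "no-preload-861000" does not exist'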

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-861000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-861000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-861000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.568666ms)

** stderr ** 
	error: context "no-preload-861000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-861000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-861000 -n no-preload-861000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-861000 -n no-preload-861000: exit status 7 (31.61575ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-861000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)
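
Against a live cluster, the assertion here reduces to "the dashboard-metrics-scraper deployment references registry.k8s.io/echoserver:1.4". A hedged equivalent of the test's describe-and-grep, using jsonpath instead (only meaningful once the context exists):

    # Sketch: check the images used by the dashboard-metrics-scraper deployment.
    kubectl --context no-preload-861000 -n kubernetes-dashboard \
      get deploy dashboard-metrics-scraper \
      -o jsonpath='{.spec.template.spec.containers[*].image}' \
      | grep -q 'registry.k8s.io/echoserver:1.4' && echo "image OK"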

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-861000 image list --format=json
start_stop_delete_test.go:304: v1.29.0-rc.2 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.10-0",
- 	"registry.k8s.io/kube-apiserver:v1.29.0-rc.2",
- 	"registry.k8s.io/kube-controller-manager:v1.29.0-rc.2",
- 	"registry.k8s.io/kube-proxy:v1.29.0-rc.2",
- 	"registry.k8s.io/kube-scheduler:v1.29.0-rc.2",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-861000 -n no-preload-861000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-861000 -n no-preload-861000: exit status 7 (31.469ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-861000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)
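
The diff reads as "every expected v1.29.0-rc.2 image is missing": with no running host, the image list comes back empty, so the entire want-list survives on the "-" side. The comparison can be reproduced outside the test harness (sketch; the repoTags field name is an assumption about the JSON shape, so the table form is shown as a fallback):

    # Sketch: dump what the profile's runtime actually holds.
    out/minikube-darwin-arm64 -p no-preload-861000 image list --format=table
    # JSON form, flattened with jq if the entries expose a repoTags array:
    out/minikube-darwin-arm64 -p no-preload-861000 image list --format=json \
      | jq -r '.[].repoTags[]?'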

TestStartStop/group/no-preload/serial/Pause (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-861000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-861000 --alsologtostderr -v=1: exit status 83 (41.829541ms)

-- stdout --
	* The control-plane node no-preload-861000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-861000"

-- /stdout --
** stderr ** 
	I0314 11:21:51.668974   14716 out.go:291] Setting OutFile to fd 1 ...
	I0314 11:21:51.669124   14716 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:21:51.669127   14716 out.go:304] Setting ErrFile to fd 2...
	I0314 11:21:51.669129   14716 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:21:51.669258   14716 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
	I0314 11:21:51.669467   14716 out.go:298] Setting JSON to false
	I0314 11:21:51.669473   14716 mustload.go:65] Loading cluster: no-preload-861000
	I0314 11:21:51.669656   14716 config.go:182] Loaded profile config "no-preload-861000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0314 11:21:51.673517   14716 out.go:177] * The control-plane node no-preload-861000 host is not running: state=Stopped
	I0314 11:21:51.677518   14716 out.go:177]   To start a cluster, run: "minikube start -p no-preload-861000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-861000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-861000 -n no-preload-861000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-861000 -n no-preload-861000: exit status 7 (31.442209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-861000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-861000 -n no-preload-861000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-861000 -n no-preload-861000: exit status 7 (30.695625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-861000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)
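
Same story as the old-k8s-version pause above: exit status 83 with state=Stopped. Once a whole group is wedged like this, the report's own advice ("minikube delete -p <profile>") generalizes; a cleanup sketch before a fresh run (destructive; the --all and --purge flags are standard minikube options rather than anything this log itself runs):

    # Sketch: tear down all stale profiles and cached state before retrying.
    out/minikube-darwin-arm64 delete --all --purge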

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (10.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-610000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-610000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.4: exit status 80 (10.171518875s)

-- stdout --
	* [default-k8s-diff-port-610000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18384
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-610000" primary control-plane node in "default-k8s-diff-port-610000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-610000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0314 11:21:52.371009   14751 out.go:291] Setting OutFile to fd 1 ...
	I0314 11:21:52.371153   14751 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:21:52.371156   14751 out.go:304] Setting ErrFile to fd 2...
	I0314 11:21:52.371158   14751 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:21:52.371276   14751 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
	I0314 11:21:52.372337   14751 out.go:298] Setting JSON to false
	I0314 11:21:52.388352   14751 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8484,"bootTime":1710432028,"procs":379,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0314 11:21:52.388419   14751 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0314 11:21:52.393498   14751 out.go:177] * [default-k8s-diff-port-610000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0314 11:21:52.400425   14751 out.go:177]   - MINIKUBE_LOCATION=18384
	I0314 11:21:52.403483   14751 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	I0314 11:21:52.400466   14751 notify.go:220] Checking for updates...
	I0314 11:21:52.409490   14751 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0314 11:21:52.412524   14751 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 11:21:52.415543   14751 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	I0314 11:21:52.416917   14751 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 11:21:52.419914   14751 config.go:182] Loaded profile config "embed-certs-178000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 11:21:52.419981   14751 config.go:182] Loaded profile config "multinode-382000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 11:21:52.420029   14751 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 11:21:52.424573   14751 out.go:177] * Using the qemu2 driver based on user configuration
	I0314 11:21:52.430482   14751 start.go:297] selected driver: qemu2
	I0314 11:21:52.430487   14751 start.go:901] validating driver "qemu2" against <nil>
	I0314 11:21:52.430492   14751 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 11:21:52.432771   14751 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0314 11:21:52.435474   14751 out.go:177] * Automatically selected the socket_vmnet network
	I0314 11:21:52.438618   14751 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 11:21:52.438648   14751 cni.go:84] Creating CNI manager for ""
	I0314 11:21:52.438655   14751 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0314 11:21:52.438659   14751 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0314 11:21:52.438689   14751 start.go:340] cluster config:
	{Name:default-k8s-diff-port-610000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-610000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 11:21:52.443057   14751 iso.go:125] acquiring lock: {Name:mkb282eb7d87bd36e7a71d78648e2dadb7f0a89e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 11:21:52.450545   14751 out.go:177] * Starting "default-k8s-diff-port-610000" primary control-plane node in "default-k8s-diff-port-610000" cluster
	I0314 11:21:52.454527   14751 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0314 11:21:52.454544   14751 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0314 11:21:52.454563   14751 cache.go:56] Caching tarball of preloaded images
	I0314 11:21:52.454628   14751 preload.go:173] Found /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0314 11:21:52.454634   14751 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0314 11:21:52.454707   14751 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/default-k8s-diff-port-610000/config.json ...
	I0314 11:21:52.454719   14751 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/default-k8s-diff-port-610000/config.json: {Name:mk0233a89ef3f9d6eb2eeff7041e1135ad5c3eb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 11:21:52.454934   14751 start.go:360] acquireMachinesLock for default-k8s-diff-port-610000: {Name:mk7f342f84a1fcf2acaf2ebdc885e8ee2d848fd8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 11:21:52.454971   14751 start.go:364] duration metric: took 27.5µs to acquireMachinesLock for "default-k8s-diff-port-610000"
	I0314 11:21:52.454985   14751 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-610000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-610000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0314 11:21:52.455015   14751 start.go:125] createHost starting for "" (driver="qemu2")
	I0314 11:21:52.462483   14751 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0314 11:21:52.480020   14751 start.go:159] libmachine.API.Create for "default-k8s-diff-port-610000" (driver="qemu2")
	I0314 11:21:52.480046   14751 client.go:168] LocalClient.Create starting
	I0314 11:21:52.480100   14751 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/ca.pem
	I0314 11:21:52.480131   14751 main.go:141] libmachine: Decoding PEM data...
	I0314 11:21:52.480141   14751 main.go:141] libmachine: Parsing certificate...
	I0314 11:21:52.480190   14751 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/cert.pem
	I0314 11:21:52.480211   14751 main.go:141] libmachine: Decoding PEM data...
	I0314 11:21:52.480218   14751 main.go:141] libmachine: Parsing certificate...
	I0314 11:21:52.480554   14751 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18384-10823/.minikube/cache/iso/arm64/minikube-v1.32.1-1710348681-18375-arm64.iso...
	I0314 11:21:52.620584   14751 main.go:141] libmachine: Creating SSH key...
	I0314 11:21:52.661134   14751 main.go:141] libmachine: Creating Disk image...
	I0314 11:21:52.661140   14751 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0314 11:21:52.661328   14751 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/default-k8s-diff-port-610000/disk.qcow2.raw /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/default-k8s-diff-port-610000/disk.qcow2
	I0314 11:21:52.673407   14751 main.go:141] libmachine: STDOUT: 
	I0314 11:21:52.673431   14751 main.go:141] libmachine: STDERR: 
	I0314 11:21:52.673499   14751 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/default-k8s-diff-port-610000/disk.qcow2 +20000M
	I0314 11:21:52.683976   14751 main.go:141] libmachine: STDOUT: Image resized.
	
	I0314 11:21:52.683992   14751 main.go:141] libmachine: STDERR: 
	I0314 11:21:52.684012   14751 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/default-k8s-diff-port-610000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/default-k8s-diff-port-610000/disk.qcow2
	I0314 11:21:52.684016   14751 main.go:141] libmachine: Starting QEMU VM...
	I0314 11:21:52.684042   14751 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/default-k8s-diff-port-610000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/default-k8s-diff-port-610000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/default-k8s-diff-port-610000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:d6:da:94:ab:de -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/default-k8s-diff-port-610000/disk.qcow2
	I0314 11:21:52.685624   14751 main.go:141] libmachine: STDOUT: 
	I0314 11:21:52.685640   14751 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0314 11:21:52.685658   14751 client.go:171] duration metric: took 205.605958ms to LocalClient.Create
	I0314 11:21:54.687881   14751 start.go:128] duration metric: took 2.23282775s to createHost
	I0314 11:21:54.687976   14751 start.go:83] releasing machines lock for "default-k8s-diff-port-610000", held for 2.232986916s
	W0314 11:21:54.688087   14751 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0314 11:21:54.703051   14751 out.go:177] * Deleting "default-k8s-diff-port-610000" in qemu2 ...
	W0314 11:21:54.732277   14751 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0314 11:21:54.732308   14751 start.go:728] Will try again in 5 seconds ...
	I0314 11:21:59.734600   14751 start.go:360] acquireMachinesLock for default-k8s-diff-port-610000: {Name:mk7f342f84a1fcf2acaf2ebdc885e8ee2d848fd8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 11:21:59.734973   14751 start.go:364] duration metric: took 270.083µs to acquireMachinesLock for "default-k8s-diff-port-610000"
	I0314 11:21:59.735107   14751 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-610000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-610000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0314 11:21:59.735459   14751 start.go:125] createHost starting for "" (driver="qemu2")
	I0314 11:21:59.745129   14751 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0314 11:21:59.794269   14751 start.go:159] libmachine.API.Create for "default-k8s-diff-port-610000" (driver="qemu2")
	I0314 11:21:59.794334   14751 client.go:168] LocalClient.Create starting
	I0314 11:21:59.794459   14751 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/ca.pem
	I0314 11:21:59.794505   14751 main.go:141] libmachine: Decoding PEM data...
	I0314 11:21:59.794523   14751 main.go:141] libmachine: Parsing certificate...
	I0314 11:21:59.794589   14751 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/cert.pem
	I0314 11:21:59.794616   14751 main.go:141] libmachine: Decoding PEM data...
	I0314 11:21:59.794628   14751 main.go:141] libmachine: Parsing certificate...
	I0314 11:21:59.795329   14751 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18384-10823/.minikube/cache/iso/arm64/minikube-v1.32.1-1710348681-18375-arm64.iso...
	I0314 11:22:00.242062   14751 main.go:141] libmachine: Creating SSH key...
	I0314 11:22:00.441708   14751 main.go:141] libmachine: Creating Disk image...
	I0314 11:22:00.441721   14751 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0314 11:22:00.441890   14751 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/default-k8s-diff-port-610000/disk.qcow2.raw /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/default-k8s-diff-port-610000/disk.qcow2
	I0314 11:22:00.454279   14751 main.go:141] libmachine: STDOUT: 
	I0314 11:22:00.454303   14751 main.go:141] libmachine: STDERR: 
	I0314 11:22:00.454353   14751 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/default-k8s-diff-port-610000/disk.qcow2 +20000M
	I0314 11:22:00.464932   14751 main.go:141] libmachine: STDOUT: Image resized.
	
	I0314 11:22:00.464950   14751 main.go:141] libmachine: STDERR: 
	I0314 11:22:00.464964   14751 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/default-k8s-diff-port-610000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/default-k8s-diff-port-610000/disk.qcow2
	I0314 11:22:00.464970   14751 main.go:141] libmachine: Starting QEMU VM...
	I0314 11:22:00.465009   14751 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/default-k8s-diff-port-610000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/default-k8s-diff-port-610000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/default-k8s-diff-port-610000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:8b:a3:82:54:46 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/default-k8s-diff-port-610000/disk.qcow2
	I0314 11:22:00.466732   14751 main.go:141] libmachine: STDOUT: 
	I0314 11:22:00.466750   14751 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0314 11:22:00.466765   14751 client.go:171] duration metric: took 672.424083ms to LocalClient.Create
	I0314 11:22:02.468915   14751 start.go:128] duration metric: took 2.733424791s to createHost
	I0314 11:22:02.468949   14751 start.go:83] releasing machines lock for "default-k8s-diff-port-610000", held for 2.733945s
	W0314 11:22:02.469192   14751 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-610000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-610000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0314 11:22:02.477516   14751 out.go:177] 
	W0314 11:22:02.489626   14751 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0314 11:22:02.489677   14751 out.go:239] * 
	* 
	W0314 11:22:02.492541   14751 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0314 11:22:02.497541   14751 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-610000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.4": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-610000 -n default-k8s-diff-port-610000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-610000 -n default-k8s-diff-port-610000: exit status 7 (70.01725ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-610000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (10.24s)
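
Note on the failure above: the disk image was created and the QEMU launch was attempted through the socket_vmnet_client wrapper, but the wrapper could not reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so the VM never received its network file descriptor and never booted. A minimal recovery sketch for the CI host follows; it assumes socket_vmnet is installed as a Homebrew service, which this report does not confirm (the log only shows the client binary at /opt/socket_vmnet/bin/socket_vmnet_client).

	# Is anything listening on the daemon's unix socket? (path taken from the log)
	ls -l /var/run/socket_vmnet
	# BSD nc on macOS can probe unix sockets; a non-zero exit means no listener.
	nc -U /var/run/socket_vmnet < /dev/null && echo listening || echo refused
	# Assumption: socket_vmnet was installed via Homebrew; vmnet requires root.
	sudo brew services restart socket_vmnet
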

TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-178000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-178000 create -f testdata/busybox.yaml: exit status 1 (28.769667ms)

** stderr ** 
	error: context "embed-certs-178000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-178000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-178000 -n embed-certs-178000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-178000 -n embed-certs-178000: exit status 7 (30.961667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-178000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-178000 -n embed-certs-178000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-178000 -n embed-certs-178000: exit status 7 (31.181708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-178000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-178000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-178000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-178000 describe deploy/metrics-server -n kube-system: exit status 1 (27.071708ms)

** stderr ** 
	error: context "embed-certs-178000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-178000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-178000 -n embed-certs-178000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-178000 -n embed-certs-178000: exit status 7 (31.225125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-178000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/embed-certs/serial/SecondStart (7.36s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-178000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-178000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.4: exit status 80 (7.2877475s)

-- stdout --
	* [embed-certs-178000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18384
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-178000" primary control-plane node in "embed-certs-178000" cluster
	* Restarting existing qemu2 VM for "embed-certs-178000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-178000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0314 11:22:00.301474   14792 out.go:291] Setting OutFile to fd 1 ...
	I0314 11:22:00.301619   14792 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:22:00.301622   14792 out.go:304] Setting ErrFile to fd 2...
	I0314 11:22:00.301624   14792 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:22:00.301758   14792 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
	I0314 11:22:00.302726   14792 out.go:298] Setting JSON to false
	I0314 11:22:00.319114   14792 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8492,"bootTime":1710432028,"procs":379,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0314 11:22:00.319177   14792 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0314 11:22:00.323140   14792 out.go:177] * [embed-certs-178000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0314 11:22:00.331116   14792 out.go:177]   - MINIKUBE_LOCATION=18384
	I0314 11:22:00.334941   14792 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	I0314 11:22:00.331232   14792 notify.go:220] Checking for updates...
	I0314 11:22:00.342107   14792 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0314 11:22:00.343581   14792 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 11:22:00.347073   14792 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	I0314 11:22:00.350105   14792 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 11:22:00.353343   14792 config.go:182] Loaded profile config "embed-certs-178000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 11:22:00.353576   14792 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 11:22:00.358086   14792 out.go:177] * Using the qemu2 driver based on existing profile
	I0314 11:22:00.365099   14792 start.go:297] selected driver: qemu2
	I0314 11:22:00.365103   14792 start.go:901] validating driver "qemu2" against &{Name:embed-certs-178000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-178000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 11:22:00.365149   14792 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 11:22:00.367475   14792 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 11:22:00.367519   14792 cni.go:84] Creating CNI manager for ""
	I0314 11:22:00.367526   14792 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0314 11:22:00.367552   14792 start.go:340] cluster config:
	{Name:embed-certs-178000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-178000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 11:22:00.371454   14792 iso.go:125] acquiring lock: {Name:mkb282eb7d87bd36e7a71d78648e2dadb7f0a89e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 11:22:00.380105   14792 out.go:177] * Starting "embed-certs-178000" primary control-plane node in "embed-certs-178000" cluster
	I0314 11:22:00.384063   14792 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0314 11:22:00.384075   14792 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0314 11:22:00.384083   14792 cache.go:56] Caching tarball of preloaded images
	I0314 11:22:00.384127   14792 preload.go:173] Found /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0314 11:22:00.384132   14792 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0314 11:22:00.384193   14792 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/embed-certs-178000/config.json ...
	I0314 11:22:00.384500   14792 start.go:360] acquireMachinesLock for embed-certs-178000: {Name:mk7f342f84a1fcf2acaf2ebdc885e8ee2d848fd8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 11:22:02.469104   14792 start.go:364] duration metric: took 2.084551375s to acquireMachinesLock for "embed-certs-178000"
	I0314 11:22:02.469222   14792 start.go:96] Skipping create...Using existing machine configuration
	I0314 11:22:02.469240   14792 fix.go:54] fixHost starting: 
	I0314 11:22:02.469650   14792 fix.go:112] recreateIfNeeded on embed-certs-178000: state=Stopped err=<nil>
	W0314 11:22:02.469682   14792 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 11:22:02.485610   14792 out.go:177] * Restarting existing qemu2 VM for "embed-certs-178000" ...
	I0314 11:22:02.492717   14792 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/embed-certs-178000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/embed-certs-178000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/embed-certs-178000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:0c:e5:2f:e4:1f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/embed-certs-178000/disk.qcow2
	I0314 11:22:02.500373   14792 main.go:141] libmachine: STDOUT: 
	I0314 11:22:02.500442   14792 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0314 11:22:02.500549   14792 fix.go:56] duration metric: took 31.301583ms for fixHost
	I0314 11:22:02.500567   14792 start.go:83] releasing machines lock for "embed-certs-178000", held for 31.437542ms
	W0314 11:22:02.500606   14792 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0314 11:22:02.500762   14792 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0314 11:22:02.500776   14792 start.go:728] Will try again in 5 seconds ...
	I0314 11:22:07.503020   14792 start.go:360] acquireMachinesLock for embed-certs-178000: {Name:mk7f342f84a1fcf2acaf2ebdc885e8ee2d848fd8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 11:22:07.503392   14792 start.go:364] duration metric: took 270.333µs to acquireMachinesLock for "embed-certs-178000"
	I0314 11:22:07.503447   14792 start.go:96] Skipping create...Using existing machine configuration
	I0314 11:22:07.503462   14792 fix.go:54] fixHost starting: 
	I0314 11:22:07.503989   14792 fix.go:112] recreateIfNeeded on embed-certs-178000: state=Stopped err=<nil>
	W0314 11:22:07.504012   14792 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 11:22:07.511447   14792 out.go:177] * Restarting existing qemu2 VM for "embed-certs-178000" ...
	I0314 11:22:07.515678   14792 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/embed-certs-178000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/embed-certs-178000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/embed-certs-178000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:0c:e5:2f:e4:1f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/embed-certs-178000/disk.qcow2
	I0314 11:22:07.525295   14792 main.go:141] libmachine: STDOUT: 
	I0314 11:22:07.525381   14792 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0314 11:22:07.525486   14792 fix.go:56] duration metric: took 22.014041ms for fixHost
	I0314 11:22:07.525518   14792 start.go:83] releasing machines lock for "embed-certs-178000", held for 22.093584ms
	W0314 11:22:07.525724   14792 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-178000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-178000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0314 11:22:07.533408   14792 out.go:177] 
	W0314 11:22:07.537498   14792 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0314 11:22:07.537535   14792 out.go:239] * 
	* 
	W0314 11:22:07.540067   14792 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0314 11:22:07.548435   14792 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-178000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.4": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-178000 -n embed-certs-178000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-178000 -n embed-certs-178000: exit status 7 (69.224792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-178000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (7.36s)
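
The second start differs from the first only in that minikube skips machine creation (start.go:96 "Skipping create...Using existing machine configuration"), finds the VM Stopped in fixHost, and restarts it through the same socket_vmnet_client wrapper. Per the command line above, the wrapper is expected to connect to /var/run/socket_vmnet, obtain a vmnet file descriptor, and hand it to qemu-system-aarch64 as the fd=3 backing of -netdev socket,id=net0; with the daemon unreachable, both the immediate restart and the 5-second retry fail identically. One way to reproduce the handshake failure in isolation, assuming the client accepts an arbitrary command in place of qemu (an assumption inferred from its usage in the log, not confirmed by this report):

	# Run the wrapper with a no-op command instead of qemu-system-aarch64.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
	# While the daemon is down this should print the same error as the log:
	#   Failed to connect to "/var/run/socket_vmnet": Connection refused
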

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-610000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-610000 create -f testdata/busybox.yaml: exit status 1 (29.712208ms)

** stderr ** 
	error: context "default-k8s-diff-port-610000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-610000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-610000 -n default-k8s-diff-port-610000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-610000 -n default-k8s-diff-port-610000: exit status 7 (30.355334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-610000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-610000 -n default-k8s-diff-port-610000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-610000 -n default-k8s-diff-port-610000: exit status 7 (31.702791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-610000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)
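
The DeployApp failure above is a downstream effect rather than a new fault: because FirstStart and SecondStart both exited with status 80, minikube never wrote a kubeconfig context for the profile, so every kubectl-driven subtest in this group fails immediately with an "error: context ... does not exist" message. A quick way to confirm the missing context (profile name taken from the log):

	kubectl config get-contexts default-k8s-diff-port-610000
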

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-610000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-610000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-610000 describe deploy/metrics-server -n kube-system: exit status 1 (26.46225ms)

** stderr ** 
	error: context "default-k8s-diff-port-610000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-610000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-610000 -n default-k8s-diff-port-610000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-610000 -n default-k8s-diff-port-610000: exit status 7 (31.416833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-610000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-610000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-610000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.4: exit status 80 (5.170794041s)

-- stdout --
	* [default-k8s-diff-port-610000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18384
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-610000" primary control-plane node in "default-k8s-diff-port-610000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-610000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-610000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0314 11:22:06.659542   14838 out.go:291] Setting OutFile to fd 1 ...
	I0314 11:22:06.659649   14838 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:22:06.659652   14838 out.go:304] Setting ErrFile to fd 2...
	I0314 11:22:06.659655   14838 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:22:06.659761   14838 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
	I0314 11:22:06.660697   14838 out.go:298] Setting JSON to false
	I0314 11:22:06.676131   14838 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8498,"bootTime":1710432028,"procs":379,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0314 11:22:06.676187   14838 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0314 11:22:06.681136   14838 out.go:177] * [default-k8s-diff-port-610000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0314 11:22:06.689077   14838 out.go:177]   - MINIKUBE_LOCATION=18384
	I0314 11:22:06.689148   14838 notify.go:220] Checking for updates...
	I0314 11:22:06.693119   14838 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	I0314 11:22:06.696060   14838 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0314 11:22:06.699103   14838 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 11:22:06.702109   14838 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	I0314 11:22:06.703378   14838 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 11:22:06.706318   14838 config.go:182] Loaded profile config "default-k8s-diff-port-610000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 11:22:06.706579   14838 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 11:22:06.710128   14838 out.go:177] * Using the qemu2 driver based on existing profile
	I0314 11:22:06.715061   14838 start.go:297] selected driver: qemu2
	I0314 11:22:06.715067   14838 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-610000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-610000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 11:22:06.715110   14838 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 11:22:06.717167   14838 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 11:22:06.717214   14838 cni.go:84] Creating CNI manager for ""
	I0314 11:22:06.717221   14838 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0314 11:22:06.717245   14838 start.go:340] cluster config:
	{Name:default-k8s-diff-port-610000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-610000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 11:22:06.721426   14838 iso.go:125] acquiring lock: {Name:mkb282eb7d87bd36e7a71d78648e2dadb7f0a89e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 11:22:06.729010   14838 out.go:177] * Starting "default-k8s-diff-port-610000" primary control-plane node in "default-k8s-diff-port-610000" cluster
	I0314 11:22:06.733142   14838 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0314 11:22:06.733160   14838 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0314 11:22:06.733174   14838 cache.go:56] Caching tarball of preloaded images
	I0314 11:22:06.733221   14838 preload.go:173] Found /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0314 11:22:06.733227   14838 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0314 11:22:06.733310   14838 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/default-k8s-diff-port-610000/config.json ...
	I0314 11:22:06.733752   14838 start.go:360] acquireMachinesLock for default-k8s-diff-port-610000: {Name:mk7f342f84a1fcf2acaf2ebdc885e8ee2d848fd8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 11:22:06.733778   14838 start.go:364] duration metric: took 19.584µs to acquireMachinesLock for "default-k8s-diff-port-610000"
	I0314 11:22:06.733787   14838 start.go:96] Skipping create...Using existing machine configuration
	I0314 11:22:06.733791   14838 fix.go:54] fixHost starting: 
	I0314 11:22:06.733911   14838 fix.go:112] recreateIfNeeded on default-k8s-diff-port-610000: state=Stopped err=<nil>
	W0314 11:22:06.733919   14838 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 11:22:06.737099   14838 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-610000" ...
	I0314 11:22:06.745202   14838 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/default-k8s-diff-port-610000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/default-k8s-diff-port-610000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/default-k8s-diff-port-610000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:8b:a3:82:54:46 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/default-k8s-diff-port-610000/disk.qcow2
	I0314 11:22:06.747283   14838 main.go:141] libmachine: STDOUT: 
	I0314 11:22:06.747315   14838 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0314 11:22:06.747346   14838 fix.go:56] duration metric: took 13.554459ms for fixHost
	I0314 11:22:06.747351   14838 start.go:83] releasing machines lock for "default-k8s-diff-port-610000", held for 13.569042ms
	W0314 11:22:06.747358   14838 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0314 11:22:06.747387   14838 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0314 11:22:06.747392   14838 start.go:728] Will try again in 5 seconds ...
	I0314 11:22:11.749520   14838 start.go:360] acquireMachinesLock for default-k8s-diff-port-610000: {Name:mk7f342f84a1fcf2acaf2ebdc885e8ee2d848fd8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 11:22:11.749832   14838 start.go:364] duration metric: took 218.458µs to acquireMachinesLock for "default-k8s-diff-port-610000"
	I0314 11:22:11.749920   14838 start.go:96] Skipping create...Using existing machine configuration
	I0314 11:22:11.749934   14838 fix.go:54] fixHost starting: 
	I0314 11:22:11.750362   14838 fix.go:112] recreateIfNeeded on default-k8s-diff-port-610000: state=Stopped err=<nil>
	W0314 11:22:11.750380   14838 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 11:22:11.754727   14838 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-610000" ...
	I0314 11:22:11.758705   14838 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/default-k8s-diff-port-610000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/default-k8s-diff-port-610000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/default-k8s-diff-port-610000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:8b:a3:82:54:46 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/default-k8s-diff-port-610000/disk.qcow2
	I0314 11:22:11.765802   14838 main.go:141] libmachine: STDOUT: 
	I0314 11:22:11.765859   14838 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0314 11:22:11.765930   14838 fix.go:56] duration metric: took 15.99825ms for fixHost
	I0314 11:22:11.765943   14838 start.go:83] releasing machines lock for "default-k8s-diff-port-610000", held for 16.097042ms
	W0314 11:22:11.766109   14838 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-610000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-610000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0314 11:22:11.774575   14838 out.go:177] 
	W0314 11:22:11.777768   14838 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0314 11:22:11.777786   14838 out.go:239] * 
	* 
	W0314 11:22:11.779722   14838 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0314 11:22:11.787690   14838 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-610000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.4": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-610000 -n default-k8s-diff-port-610000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-610000 -n default-k8s-diff-port-610000: exit status 7 (66.176083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-610000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.24s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-178000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-178000 -n embed-certs-178000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-178000 -n embed-certs-178000: exit status 7 (33.964875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-178000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-178000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-178000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-178000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.684917ms)

** stderr ** 
	error: context "embed-certs-178000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-178000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-178000 -n embed-certs-178000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-178000 -n embed-certs-178000: exit status 7 (30.818083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-178000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-178000 image list --format=json
start_stop_delete_test.go:304: v1.28.4 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.9-0",
- 	"registry.k8s.io/kube-apiserver:v1.28.4",
- 	"registry.k8s.io/kube-controller-manager:v1.28.4",
- 	"registry.k8s.io/kube-proxy:v1.28.4",
- 	"registry.k8s.io/kube-scheduler:v1.28.4",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-178000 -n embed-certs-178000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-178000 -n embed-certs-178000: exit status 7 (30.996083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-178000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)
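
The (-want +got) diff above follows the output convention of github.com/google/go-cmp; that the suite uses go-cmp here is an assumption, but the shape of the failure is clear either way: the stopped host reported no images, so got is empty and every expected v1.28.4 image surfaces as missing. A compact reproduction:

package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	want := []string{
		"gcr.io/k8s-minikube/storage-provisioner:v5",
		"registry.k8s.io/kube-apiserver:v1.28.4",
		// ...remaining v1.28.4 images elided for brevity
	}
	var got []string // empty: `image list` against a stopped VM returns nothing
	fmt.Println(cmp.Diff(want, got)) // prints each want entry with a leading "-"
}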

TestStartStop/group/embed-certs/serial/Pause (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-178000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-178000 --alsologtostderr -v=1: exit status 83 (43.6295ms)

-- stdout --
	* The control-plane node embed-certs-178000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-178000"

-- /stdout --
** stderr ** 
	I0314 11:22:07.824759   14857 out.go:291] Setting OutFile to fd 1 ...
	I0314 11:22:07.824905   14857 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:22:07.824908   14857 out.go:304] Setting ErrFile to fd 2...
	I0314 11:22:07.824911   14857 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:22:07.825033   14857 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
	I0314 11:22:07.825270   14857 out.go:298] Setting JSON to false
	I0314 11:22:07.825276   14857 mustload.go:65] Loading cluster: embed-certs-178000
	I0314 11:22:07.825456   14857 config.go:182] Loaded profile config "embed-certs-178000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 11:22:07.829873   14857 out.go:177] * The control-plane node embed-certs-178000 host is not running: state=Stopped
	I0314 11:22:07.833688   14857 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-178000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-178000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-178000 -n embed-certs-178000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-178000 -n embed-certs-178000: exit status 7 (31.323625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-178000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-178000 -n embed-certs-178000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-178000 -n embed-certs-178000: exit status 7 (30.905625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-178000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.11s)
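
The exit codes in this group are consistent: start fails with status 80 (the GUEST_PROVISION exit above), pause returns 83 because the host is merely stopped, and status returns 7, which the harness explicitly flags as "may be ok". Per minikube status --help, the status exit code encodes VM, cluster, and Kubernetes health as bits from right to left, so 7 simply means all three are down, which is expected for a profile that never started. A sketch of that composition (the constant names are illustrative):

package main

import "fmt"

// Illustrative names for the status bits described in `minikube status --help`:
// VM, cluster, and Kubernetes health, encoded right to left.
const (
	minikubeNOK   = 1 << 0 // VM/host not running
	clusterNOK    = 1 << 1 // control plane not running
	kubernetesNOK = 1 << 2 // apiserver not running
)

func main() {
	fmt.Println(minikubeNOK | clusterNOK | kubernetesNOK) // 7: everything stopped
}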

TestStartStop/group/newest-cni/serial/FirstStart (10.09s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-725000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-725000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.29.0-rc.2: exit status 80 (10.0219475s)

-- stdout --
	* [newest-cni-725000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18384
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-725000" primary control-plane node in "newest-cni-725000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-725000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0314 11:22:08.298719   14880 out.go:291] Setting OutFile to fd 1 ...
	I0314 11:22:08.298830   14880 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:22:08.298833   14880 out.go:304] Setting ErrFile to fd 2...
	I0314 11:22:08.298835   14880 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:22:08.298951   14880 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
	I0314 11:22:08.299979   14880 out.go:298] Setting JSON to false
	I0314 11:22:08.315847   14880 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8500,"bootTime":1710432028,"procs":379,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0314 11:22:08.315914   14880 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0314 11:22:08.319617   14880 out.go:177] * [newest-cni-725000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0314 11:22:08.330583   14880 out.go:177]   - MINIKUBE_LOCATION=18384
	I0314 11:22:08.326696   14880 notify.go:220] Checking for updates...
	I0314 11:22:08.336666   14880 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	I0314 11:22:08.339646   14880 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0314 11:22:08.342669   14880 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 11:22:08.345666   14880 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	I0314 11:22:08.348625   14880 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 11:22:08.352001   14880 config.go:182] Loaded profile config "default-k8s-diff-port-610000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 11:22:08.352061   14880 config.go:182] Loaded profile config "multinode-382000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 11:22:08.352103   14880 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 11:22:08.356685   14880 out.go:177] * Using the qemu2 driver based on user configuration
	I0314 11:22:08.363607   14880 start.go:297] selected driver: qemu2
	I0314 11:22:08.363612   14880 start.go:901] validating driver "qemu2" against <nil>
	I0314 11:22:08.363617   14880 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 11:22:08.365788   14880 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0314 11:22:08.365812   14880 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0314 11:22:08.373533   14880 out.go:177] * Automatically selected the socket_vmnet network
	I0314 11:22:08.376670   14880 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0314 11:22:08.376688   14880 cni.go:84] Creating CNI manager for ""
	I0314 11:22:08.376697   14880 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0314 11:22:08.376702   14880 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0314 11:22:08.376741   14880 start.go:340] cluster config:
	{Name:newest-cni-725000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-725000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 11:22:08.381267   14880 iso.go:125] acquiring lock: {Name:mkb282eb7d87bd36e7a71d78648e2dadb7f0a89e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 11:22:08.388653   14880 out.go:177] * Starting "newest-cni-725000" primary control-plane node in "newest-cni-725000" cluster
	I0314 11:22:08.392588   14880 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0314 11:22:08.392604   14880 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4
	I0314 11:22:08.392614   14880 cache.go:56] Caching tarball of preloaded images
	I0314 11:22:08.392673   14880 preload.go:173] Found /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0314 11:22:08.392680   14880 cache.go:59] Finished verifying existence of preloaded tar for v1.29.0-rc.2 on docker
	I0314 11:22:08.392747   14880 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/newest-cni-725000/config.json ...
	I0314 11:22:08.392759   14880 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/newest-cni-725000/config.json: {Name:mke2af4dc5982104efcaa83bfbf50435e135d9e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 11:22:08.392994   14880 start.go:360] acquireMachinesLock for newest-cni-725000: {Name:mk7f342f84a1fcf2acaf2ebdc885e8ee2d848fd8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 11:22:08.393030   14880 start.go:364] duration metric: took 29.167µs to acquireMachinesLock for "newest-cni-725000"
	I0314 11:22:08.393044   14880 start.go:93] Provisioning new machine with config: &{Name:newest-cni-725000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-725000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0314 11:22:08.393088   14880 start.go:125] createHost starting for "" (driver="qemu2")
	I0314 11:22:08.396663   14880 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0314 11:22:08.415302   14880 start.go:159] libmachine.API.Create for "newest-cni-725000" (driver="qemu2")
	I0314 11:22:08.415335   14880 client.go:168] LocalClient.Create starting
	I0314 11:22:08.415408   14880 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/ca.pem
	I0314 11:22:08.415440   14880 main.go:141] libmachine: Decoding PEM data...
	I0314 11:22:08.415450   14880 main.go:141] libmachine: Parsing certificate...
	I0314 11:22:08.415502   14880 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/cert.pem
	I0314 11:22:08.415525   14880 main.go:141] libmachine: Decoding PEM data...
	I0314 11:22:08.415533   14880 main.go:141] libmachine: Parsing certificate...
	I0314 11:22:08.415943   14880 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18384-10823/.minikube/cache/iso/arm64/minikube-v1.32.1-1710348681-18375-arm64.iso...
	I0314 11:22:08.580916   14880 main.go:141] libmachine: Creating SSH key...
	I0314 11:22:08.855447   14880 main.go:141] libmachine: Creating Disk image...
	I0314 11:22:08.855457   14880 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0314 11:22:08.855958   14880 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/newest-cni-725000/disk.qcow2.raw /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/newest-cni-725000/disk.qcow2
	I0314 11:22:08.868438   14880 main.go:141] libmachine: STDOUT: 
	I0314 11:22:08.868464   14880 main.go:141] libmachine: STDERR: 
	I0314 11:22:08.868523   14880 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/newest-cni-725000/disk.qcow2 +20000M
	I0314 11:22:08.879284   14880 main.go:141] libmachine: STDOUT: Image resized.
	
	I0314 11:22:08.879301   14880 main.go:141] libmachine: STDERR: 
	I0314 11:22:08.879320   14880 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/newest-cni-725000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/newest-cni-725000/disk.qcow2
	I0314 11:22:08.879324   14880 main.go:141] libmachine: Starting QEMU VM...
	I0314 11:22:08.879377   14880 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/newest-cni-725000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/newest-cni-725000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/newest-cni-725000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:68:65:d0:eb:90 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/newest-cni-725000/disk.qcow2
	I0314 11:22:08.880948   14880 main.go:141] libmachine: STDOUT: 
	I0314 11:22:08.880963   14880 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0314 11:22:08.880983   14880 client.go:171] duration metric: took 465.639584ms to LocalClient.Create
	I0314 11:22:10.883246   14880 start.go:128] duration metric: took 2.490128416s to createHost
	I0314 11:22:10.883302   14880 start.go:83] releasing machines lock for "newest-cni-725000", held for 2.490253708s
	W0314 11:22:10.883346   14880 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0314 11:22:10.894428   14880 out.go:177] * Deleting "newest-cni-725000" in qemu2 ...
	W0314 11:22:10.930670   14880 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0314 11:22:10.930696   14880 start.go:728] Will try again in 5 seconds ...
	I0314 11:22:15.932876   14880 start.go:360] acquireMachinesLock for newest-cni-725000: {Name:mk7f342f84a1fcf2acaf2ebdc885e8ee2d848fd8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 11:22:15.933303   14880 start.go:364] duration metric: took 337.208µs to acquireMachinesLock for "newest-cni-725000"
	I0314 11:22:15.933459   14880 start.go:93] Provisioning new machine with config: &{Name:newest-cni-725000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-725000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0314 11:22:15.933753   14880 start.go:125] createHost starting for "" (driver="qemu2")
	I0314 11:22:15.940430   14880 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0314 11:22:15.985299   14880 start.go:159] libmachine.API.Create for "newest-cni-725000" (driver="qemu2")
	I0314 11:22:15.985345   14880 client.go:168] LocalClient.Create starting
	I0314 11:22:15.985462   14880 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/ca.pem
	I0314 11:22:15.985520   14880 main.go:141] libmachine: Decoding PEM data...
	I0314 11:22:15.985537   14880 main.go:141] libmachine: Parsing certificate...
	I0314 11:22:15.985610   14880 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18384-10823/.minikube/certs/cert.pem
	I0314 11:22:15.985652   14880 main.go:141] libmachine: Decoding PEM data...
	I0314 11:22:15.985663   14880 main.go:141] libmachine: Parsing certificate...
	I0314 11:22:15.986253   14880 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18384-10823/.minikube/cache/iso/arm64/minikube-v1.32.1-1710348681-18375-arm64.iso...
	I0314 11:22:16.136989   14880 main.go:141] libmachine: Creating SSH key...
	I0314 11:22:16.214544   14880 main.go:141] libmachine: Creating Disk image...
	I0314 11:22:16.214553   14880 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0314 11:22:16.214755   14880 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/newest-cni-725000/disk.qcow2.raw /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/newest-cni-725000/disk.qcow2
	I0314 11:22:16.226965   14880 main.go:141] libmachine: STDOUT: 
	I0314 11:22:16.226998   14880 main.go:141] libmachine: STDERR: 
	I0314 11:22:16.227070   14880 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/newest-cni-725000/disk.qcow2 +20000M
	I0314 11:22:16.237709   14880 main.go:141] libmachine: STDOUT: Image resized.
	
	I0314 11:22:16.237726   14880 main.go:141] libmachine: STDERR: 
	I0314 11:22:16.237746   14880 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/newest-cni-725000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/newest-cni-725000/disk.qcow2
	I0314 11:22:16.237750   14880 main.go:141] libmachine: Starting QEMU VM...
	I0314 11:22:16.237783   14880 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/newest-cni-725000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/newest-cni-725000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/newest-cni-725000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:46:5b:4d:a4:ea -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/newest-cni-725000/disk.qcow2
	I0314 11:22:16.239469   14880 main.go:141] libmachine: STDOUT: 
	I0314 11:22:16.239485   14880 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0314 11:22:16.239500   14880 client.go:171] duration metric: took 254.149917ms to LocalClient.Create
	I0314 11:22:18.241683   14880 start.go:128] duration metric: took 2.307890541s to createHost
	I0314 11:22:18.241754   14880 start.go:83] releasing machines lock for "newest-cni-725000", held for 2.3084185s
	W0314 11:22:18.242177   14880 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-725000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-725000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0314 11:22:18.255769   14880 out.go:177] 
	W0314 11:22:18.258945   14880 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0314 11:22:18.259024   14880 out.go:239] * 
	* 
	W0314 11:22:18.261724   14880 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0314 11:22:18.277778   14880 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-725000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.29.0-rc.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-725000 -n newest-cni-725000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-725000 -n newest-cni-725000: exit status 7 (69.85375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-725000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (10.09s)
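
Every start attempt in this run dies the same way: qemu-system-aarch64 is launched through /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach the daemon behind /var/run/socket_vmnet, so no VM ever boots and every downstream test fails. A quick probe for that condition (illustrative only, not part of the suite):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Try the same unix socket the qemu2 driver hands to socket_vmnet_client.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet unreachable:", err) // connection refused, as in the log
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

On a healthy host the daemon runs as a root service; for Homebrew installs, restarting it (something like sudo brew services start socket_vmnet) typically clears this entire class of failures.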

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-610000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-610000 -n default-k8s-diff-port-610000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-610000 -n default-k8s-diff-port-610000: exit status 7 (34.43025ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-610000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-610000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-610000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-610000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.631375ms)

** stderr ** 
	error: context "default-k8s-diff-port-610000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-610000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-610000 -n default-k8s-diff-port-610000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-610000 -n default-k8s-diff-port-610000: exit status 7 (31.663375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-610000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-610000 image list --format=json
start_stop_delete_test.go:304: v1.28.4 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.9-0",
- 	"registry.k8s.io/kube-apiserver:v1.28.4",
- 	"registry.k8s.io/kube-controller-manager:v1.28.4",
- 	"registry.k8s.io/kube-proxy:v1.28.4",
- 	"registry.k8s.io/kube-scheduler:v1.28.4",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-610000 -n default-k8s-diff-port-610000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-610000 -n default-k8s-diff-port-610000: exit status 7 (30.852666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-610000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-610000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-610000 --alsologtostderr -v=1: exit status 83 (42.509834ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-610000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-610000"

-- /stdout --
** stderr ** 
	I0314 11:22:12.063830   14902 out.go:291] Setting OutFile to fd 1 ...
	I0314 11:22:12.063987   14902 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:22:12.063991   14902 out.go:304] Setting ErrFile to fd 2...
	I0314 11:22:12.063993   14902 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:22:12.064118   14902 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
	I0314 11:22:12.064347   14902 out.go:298] Setting JSON to false
	I0314 11:22:12.064352   14902 mustload.go:65] Loading cluster: default-k8s-diff-port-610000
	I0314 11:22:12.064529   14902 config.go:182] Loaded profile config "default-k8s-diff-port-610000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 11:22:12.069127   14902 out.go:177] * The control-plane node default-k8s-diff-port-610000 host is not running: state=Stopped
	I0314 11:22:12.072018   14902 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-610000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-610000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-610000 -n default-k8s-diff-port-610000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-610000 -n default-k8s-diff-port-610000: exit status 7 (31.215125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-610000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-610000 -n default-k8s-diff-port-610000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-610000 -n default-k8s-diff-port-610000: exit status 7 (31.039625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-610000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.11s)

TestStartStop/group/newest-cni/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-725000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-725000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.29.0-rc.2: exit status 80 (5.188911834s)

-- stdout --
	* [newest-cni-725000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18384
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-725000" primary control-plane node in "newest-cni-725000" cluster
	* Restarting existing qemu2 VM for "newest-cni-725000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-725000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0314 11:22:21.560474   14960 out.go:291] Setting OutFile to fd 1 ...
	I0314 11:22:21.560622   14960 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:22:21.560625   14960 out.go:304] Setting ErrFile to fd 2...
	I0314 11:22:21.560627   14960 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:22:21.560739   14960 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
	I0314 11:22:21.561670   14960 out.go:298] Setting JSON to false
	I0314 11:22:21.577323   14960 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":8513,"bootTime":1710432028,"procs":380,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0314 11:22:21.577382   14960 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0314 11:22:21.581895   14960 out.go:177] * [newest-cni-725000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0314 11:22:21.587772   14960 out.go:177]   - MINIKUBE_LOCATION=18384
	I0314 11:22:21.591793   14960 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	I0314 11:22:21.587815   14960 notify.go:220] Checking for updates...
	I0314 11:22:21.597720   14960 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0314 11:22:21.600765   14960 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 11:22:21.602218   14960 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	I0314 11:22:21.609751   14960 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 11:22:21.613113   14960 config.go:182] Loaded profile config "newest-cni-725000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0314 11:22:21.613357   14960 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 11:22:21.617730   14960 out.go:177] * Using the qemu2 driver based on existing profile
	I0314 11:22:21.625614   14960 start.go:297] selected driver: qemu2
	I0314 11:22:21.625620   14960 start.go:901] validating driver "qemu2" against &{Name:newest-cni-725000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-725000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 11:22:21.625707   14960 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 11:22:21.628114   14960 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0314 11:22:21.628162   14960 cni.go:84] Creating CNI manager for ""
	I0314 11:22:21.628169   14960 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0314 11:22:21.628200   14960 start.go:340] cluster config:
	{Name:newest-cni-725000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-725000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 11:22:21.632557   14960 iso.go:125] acquiring lock: {Name:mkb282eb7d87bd36e7a71d78648e2dadb7f0a89e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 11:22:21.639791   14960 out.go:177] * Starting "newest-cni-725000" primary control-plane node in "newest-cni-725000" cluster
	I0314 11:22:21.643755   14960 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0314 11:22:21.643770   14960 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4
	I0314 11:22:21.643788   14960 cache.go:56] Caching tarball of preloaded images
	I0314 11:22:21.643845   14960 preload.go:173] Found /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0314 11:22:21.643850   14960 cache.go:59] Finished verifying existence of preloaded tar for v1.29.0-rc.2 on docker
	I0314 11:22:21.643920   14960 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/newest-cni-725000/config.json ...
	I0314 11:22:21.644390   14960 start.go:360] acquireMachinesLock for newest-cni-725000: {Name:mk7f342f84a1fcf2acaf2ebdc885e8ee2d848fd8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 11:22:21.644416   14960 start.go:364] duration metric: took 20.125µs to acquireMachinesLock for "newest-cni-725000"
	I0314 11:22:21.644426   14960 start.go:96] Skipping create...Using existing machine configuration
	I0314 11:22:21.644430   14960 fix.go:54] fixHost starting: 
	I0314 11:22:21.644556   14960 fix.go:112] recreateIfNeeded on newest-cni-725000: state=Stopped err=<nil>
	W0314 11:22:21.644564   14960 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 11:22:21.648765   14960 out.go:177] * Restarting existing qemu2 VM for "newest-cni-725000" ...
	I0314 11:22:21.656753   14960 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/newest-cni-725000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/newest-cni-725000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/newest-cni-725000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:46:5b:4d:a4:ea -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/newest-cni-725000/disk.qcow2
	I0314 11:22:21.658867   14960 main.go:141] libmachine: STDOUT: 
	I0314 11:22:21.658894   14960 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0314 11:22:21.658924   14960 fix.go:56] duration metric: took 14.493875ms for fixHost
	I0314 11:22:21.658929   14960 start.go:83] releasing machines lock for "newest-cni-725000", held for 14.508583ms
	W0314 11:22:21.658935   14960 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0314 11:22:21.658967   14960 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0314 11:22:21.658972   14960 start.go:728] Will try again in 5 seconds ...
	I0314 11:22:26.661159   14960 start.go:360] acquireMachinesLock for newest-cni-725000: {Name:mk7f342f84a1fcf2acaf2ebdc885e8ee2d848fd8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 11:22:26.661474   14960 start.go:364] duration metric: took 213.375µs to acquireMachinesLock for "newest-cni-725000"
	I0314 11:22:26.661583   14960 start.go:96] Skipping create...Using existing machine configuration
	I0314 11:22:26.661599   14960 fix.go:54] fixHost starting: 
	I0314 11:22:26.662112   14960 fix.go:112] recreateIfNeeded on newest-cni-725000: state=Stopped err=<nil>
	W0314 11:22:26.662135   14960 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 11:22:26.669363   14960 out.go:177] * Restarting existing qemu2 VM for "newest-cni-725000" ...
	I0314 11:22:26.673691   14960 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/newest-cni-725000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18384-10823/.minikube/machines/newest-cni-725000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/newest-cni-725000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:46:5b:4d:a4:ea -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18384-10823/.minikube/machines/newest-cni-725000/disk.qcow2
	I0314 11:22:26.683364   14960 main.go:141] libmachine: STDOUT: 
	I0314 11:22:26.683427   14960 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0314 11:22:26.683486   14960 fix.go:56] duration metric: took 21.887042ms for fixHost
	I0314 11:22:26.683499   14960 start.go:83] releasing machines lock for "newest-cni-725000", held for 22.009708ms
	W0314 11:22:26.683660   14960 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-725000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-725000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0314 11:22:26.692314   14960 out.go:177] 
	W0314 11:22:26.696501   14960 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0314 11:22:26.696526   14960 out.go:239] * 
	* 
	W0314 11:22:26.698953   14960 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0314 11:22:26.707464   14960 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-725000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.29.0-rc.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-725000 -n newest-cni-725000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-725000 -n newest-cni-725000: exit status 7 (73.222209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-725000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.26s)
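
Every qemu2 start in this run dies at the same precondition: minikube launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, and the client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), meaning no socket_vmnet daemon was listening on the agent. A minimal probe of that precondition, as a sketch rather than part of the suite (the socket path is copied from the log above):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the unix socket that socket_vmnet_client connects to.
	// "connection refused" (or "no such file or directory") here
	// reproduces the driver failure logged above.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", time.Second)
	if err != nil {
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

On a healthy agent the socket_vmnet daemon normally runs as a root service; restarting that service, or pointing the profile at a different --network, is the likely remedy, which would also explain why the start/stop tests in this group fail identically.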

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-725000 image list --format=json
start_stop_delete_test.go:304: v1.29.0-rc.2 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.10-0",
- 	"registry.k8s.io/kube-apiserver:v1.29.0-rc.2",
- 	"registry.k8s.io/kube-controller-manager:v1.29.0-rc.2",
- 	"registry.k8s.io/kube-proxy:v1.29.0-rc.2",
- 	"registry.k8s.io/kube-scheduler:v1.29.0-rc.2",
- 	"registry.k8s.io/pause:3.9",
  }
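
The (-want +got) block above is the rendering produced by the github.com/google/go-cmp package: "-" lines exist only in the expected list, "+" lines only in the actual one. Because the VM never started, `image list` returns nothing and every expected image is reported missing. A self-contained sketch of the same comparison (image list abridged; assumes only the go-cmp dependency):

package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	want := []string{
		"registry.k8s.io/kube-apiserver:v1.29.0-rc.2",
		"registry.k8s.io/pause:3.9",
	}
	got := []string{} // empty: the host is Stopped, so nothing is listed

	// cmp.Diff prints "-" for values only in want and "+" for values
	// only in got, matching the report format above.
	if diff := cmp.Diff(want, got); diff != "" {
		fmt.Printf("images missing (-want +got):\n%s", diff)
	}
}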
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-725000 -n newest-cni-725000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-725000 -n newest-cni-725000: exit status 7 (31.809042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-725000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)

TestStartStop/group/newest-cni/serial/Pause (0.11s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-725000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-725000 --alsologtostderr -v=1: exit status 83 (46.821708ms)

-- stdout --
	* The control-plane node newest-cni-725000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-725000"

-- /stdout --
** stderr ** 
	I0314 11:22:26.900836   14974 out.go:291] Setting OutFile to fd 1 ...
	I0314 11:22:26.900991   14974 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:22:26.900995   14974 out.go:304] Setting ErrFile to fd 2...
	I0314 11:22:26.900997   14974 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 11:22:26.901128   14974 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
	I0314 11:22:26.901358   14974 out.go:298] Setting JSON to false
	I0314 11:22:26.901363   14974 mustload.go:65] Loading cluster: newest-cni-725000
	I0314 11:22:26.901588   14974 config.go:182] Loaded profile config "newest-cni-725000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0314 11:22:26.904803   14974 out.go:177] * The control-plane node newest-cni-725000 host is not running: state=Stopped
	I0314 11:22:26.912689   14974 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-725000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-725000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-725000 -n newest-cni-725000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-725000 -n newest-cni-725000: exit status 7 (31.775417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-725000" host is not running, skipping log retrieval (state="Stopped")
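
Each post-mortem polls host state with `status --format={{.Host}}`; the --format argument is a Go text/template evaluated against minikube's status payload, which is why the command prints the bare word "Stopped". A minimal sketch of that rendering, using a hypothetical Status struct in place of minikube's real one:

package main

import (
	"os"
	"text/template"
)

// Status stands in for minikube's status payload; only Host is needed
// to render the "{{.Host}}" format used by the post-mortems above.
type Status struct {
	Host string
}

func main() {
	tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
	// Prints "Stopped", matching the -- stdout -- blocks above.
	if err := tmpl.Execute(os.Stdout, Status{Host: "Stopped"}); err != nil {
		panic(err)
	}
}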
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-725000 -n newest-cni-725000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-725000 -n newest-cni-725000: exit status 7 (31.076083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-725000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.11s)


Test pass (86/266)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.24
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.23
12 TestDownloadOnly/v1.28.4/json-events 27.22
13 TestDownloadOnly/v1.28.4/preload-exists 0
16 TestDownloadOnly/v1.28.4/kubectl 0
17 TestDownloadOnly/v1.28.4/LogsDuration 0.08
18 TestDownloadOnly/v1.28.4/DeleteAll 0.23
19 TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds 0.23
21 TestDownloadOnly/v1.29.0-rc.2/json-events 20.03
22 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
25 TestDownloadOnly/v1.29.0-rc.2/kubectl 0
26 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.08
27 TestDownloadOnly/v1.29.0-rc.2/DeleteAll 0.23
28 TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds 0.23
30 TestBinaryMirror 0.36
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
44 TestHyperKitDriverInstallOrUpdate 9.64
48 TestErrorSpam/start 0.4
49 TestErrorSpam/status 0.1
50 TestErrorSpam/pause 0.13
51 TestErrorSpam/unpause 0.13
52 TestErrorSpam/stop 8.02
55 TestFunctional/serial/CopySyncFile 0
57 TestFunctional/serial/AuditLog 0
63 TestFunctional/serial/CacheCmd/cache/add_remote 6.04
64 TestFunctional/serial/CacheCmd/cache/add_local 1.16
65 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
66 TestFunctional/serial/CacheCmd/cache/list 0.04
69 TestFunctional/serial/CacheCmd/cache/delete 0.07
78 TestFunctional/parallel/ConfigCmd 0.24
80 TestFunctional/parallel/DryRun 0.28
81 TestFunctional/parallel/InternationalLanguage 0.11
87 TestFunctional/parallel/AddonsCmd 0.12
102 TestFunctional/parallel/License 1.41
103 TestFunctional/parallel/Version/short 0.04
110 TestFunctional/parallel/ImageCommands/Setup 5.38
123 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
131 TestFunctional/parallel/ImageCommands/ImageRemove 0.08
133 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.13
134 TestFunctional/parallel/ProfileCmd/profile_not_create 0.14
135 TestFunctional/parallel/ProfileCmd/profile_list 0.11
136 TestFunctional/parallel/ProfileCmd/profile_json_output 0.11
141 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 10.04
143 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.16
144 TestFunctional/delete_addon-resizer_images 0.17
145 TestFunctional/delete_my-image_image 0.04
146 TestFunctional/delete_minikube_cached_images 0.04
175 TestJSONOutput/start/Audit 0
177 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
178 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
181 TestJSONOutput/pause/Audit 0
183 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/unpause/Audit 0
189 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/stop/Command 3.34
193 TestJSONOutput/stop/Audit 0
195 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
197 TestErrorJSONOutput 0.33
202 TestMainNoArgs 0.04
249 TestStoppedBinaryUpgrade/Setup 5.54
261 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
265 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
266 TestNoKubernetes/serial/ProfileList 31.43
267 TestNoKubernetes/serial/Stop 3.94
269 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.05
277 TestStoppedBinaryUpgrade/MinikubeLogs 0.7
286 TestStartStop/group/old-k8s-version/serial/Stop 2.01
287 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.13
291 TestStartStop/group/no-preload/serial/Stop 3.68
292 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.14
308 TestStartStop/group/embed-certs/serial/Stop 1.98
309 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.25
313 TestStartStop/group/default-k8s-diff-port/serial/Stop 3.72
314 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.12
326 TestStartStop/group/newest-cni/serial/DeployApp 0
327 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
328 TestStartStop/group/newest-cni/serial/Stop 2.99
329 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.12
331 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
332 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-659000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-659000: exit status 85 (97.401167ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-659000 | jenkins | v1.32.0 | 14 Mar 24 10:55 PDT |          |
	|         | -p download-only-659000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/14 10:55:17
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0314 10:55:17.587595   11240 out.go:291] Setting OutFile to fd 1 ...
	I0314 10:55:17.587749   11240 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 10:55:17.587752   11240 out.go:304] Setting ErrFile to fd 2...
	I0314 10:55:17.587755   11240 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 10:55:17.587871   11240 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
	W0314 10:55:17.587954   11240 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/18384-10823/.minikube/config/config.json: open /Users/jenkins/minikube-integration/18384-10823/.minikube/config/config.json: no such file or directory
	I0314 10:55:17.589235   11240 out.go:298] Setting JSON to true
	I0314 10:55:17.607161   11240 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6889,"bootTime":1710432028,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0314 10:55:17.607222   11240 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0314 10:55:17.613179   11240 out.go:97] [download-only-659000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0314 10:55:17.617137   11240 out.go:169] MINIKUBE_LOCATION=18384
	I0314 10:55:17.613333   11240 notify.go:220] Checking for updates...
	W0314 10:55:17.613366   11240 preload.go:294] Failed to list preload files: open /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/preloaded-tarball: no such file or directory
	I0314 10:55:17.625112   11240 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	I0314 10:55:17.628177   11240 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0314 10:55:17.631176   11240 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 10:55:17.634175   11240 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	W0314 10:55:17.640161   11240 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0314 10:55:17.640388   11240 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 10:55:17.644075   11240 out.go:97] Using the qemu2 driver based on user configuration
	I0314 10:55:17.644096   11240 start.go:297] selected driver: qemu2
	I0314 10:55:17.644112   11240 start.go:901] validating driver "qemu2" against <nil>
	I0314 10:55:17.644172   11240 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0314 10:55:17.647189   11240 out.go:169] Automatically selected the socket_vmnet network
	I0314 10:55:17.652573   11240 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0314 10:55:17.652674   11240 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0314 10:55:17.652786   11240 cni.go:84] Creating CNI manager for ""
	I0314 10:55:17.652805   11240 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0314 10:55:17.652856   11240 start.go:340] cluster config:
	{Name:download-only-659000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-659000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 10:55:17.657729   11240 iso.go:125] acquiring lock: {Name:mkb282eb7d87bd36e7a71d78648e2dadb7f0a89e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 10:55:17.662119   11240 out.go:97] Downloading VM boot image ...
	I0314 10:55:17.662149   11240 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/iso/arm64/minikube-v1.32.1-1710348681-18375-arm64.iso
	I0314 10:55:35.848439   11240 out.go:97] Starting "download-only-659000" primary control-plane node in "download-only-659000" cluster
	I0314 10:55:35.848471   11240 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0314 10:55:36.137365   11240 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0314 10:55:36.137485   11240 cache.go:56] Caching tarball of preloaded images
	I0314 10:55:36.139093   11240 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0314 10:55:36.144081   11240 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0314 10:55:36.144108   11240 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0314 10:55:36.744233   11240 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0314 10:55:57.544823   11240 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0314 10:55:57.545009   11240 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0314 10:55:58.245113   11240 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0314 10:55:58.245312   11240 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/download-only-659000/config.json ...
	I0314 10:55:58.245331   11240 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/download-only-659000/config.json: {Name:mk97c40282b2ef2a1091f4503050bda7aec3a889 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 10:55:58.246583   11240 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0314 10:55:58.246754   11240 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0314 10:55:58.590165   11240 out.go:169] 
	W0314 10:55:58.594149   11240 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/18384-10823/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x106ccf2c0 0x106ccf2c0 0x106ccf2c0 0x106ccf2c0 0x106ccf2c0 0x106ccf2c0 0x106ccf2c0] Decompressors:map[bz2:0x14000893300 gz:0x14000893308 tar:0x140008932b0 tar.bz2:0x140008932c0 tar.gz:0x140008932d0 tar.xz:0x140008932e0 tar.zst:0x140008932f0 tbz2:0x140008932c0 tgz:0x140008932d0 txz:0x140008932e0 tzst:0x140008932f0 xz:0x14000893310 zip:0x14000893320 zst:0x14000893318] Getters:map[file:0x140023748c0 http:0x1400088a230 https:0x1400088a280] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0314 10:55:58.594178   11240 out_reason.go:110] 
	W0314 10:55:58.602175   11240 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0314 10:55:58.606099   11240 out.go:169] 
	
	
	* The control-plane node download-only-659000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-659000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)
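
The hard failure preserved in the log above comes from the kubectl cache step: minikube hands the download to go-getter with a `?checksum=file:<url>.sha256` query, and that checksum URL answers 404 for v1.20.0 on darwin/arm64, so the download is rejected as an invalid checksum. A quick probe of the same URL, as a sketch that only checks the response status the log reports:

package main

import (
	"fmt"
	"net/http"
)

func main() {
	// The checksum file go-getter tried to fetch, copied from the log.
	url := "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"
	resp, err := http.Head(url)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	// Per the failure above, expect this to print a 404 status.
	fmt.Println(url, "->", resp.Status)
}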

TestDownloadOnly/v1.20.0/DeleteAll (0.24s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.24s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.23s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-659000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.23s)

TestDownloadOnly/v1.28.4/json-events (27.22s)

=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-905000 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-905000 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=qemu2 : (27.222345833s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (27.22s)

TestDownloadOnly/v1.28.4/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

TestDownloadOnly/v1.28.4/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.4/kubectl
--- PASS: TestDownloadOnly/v1.28.4/kubectl (0.00s)

TestDownloadOnly/v1.28.4/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-905000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-905000: exit status 85 (80.401667ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-659000 | jenkins | v1.32.0 | 14 Mar 24 10:55 PDT |                     |
	|         | -p download-only-659000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.32.0 | 14 Mar 24 10:55 PDT | 14 Mar 24 10:55 PDT |
	| delete  | -p download-only-659000        | download-only-659000 | jenkins | v1.32.0 | 14 Mar 24 10:55 PDT | 14 Mar 24 10:55 PDT |
	| start   | -o=json --download-only        | download-only-905000 | jenkins | v1.32.0 | 14 Mar 24 10:55 PDT |                     |
	|         | -p download-only-905000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/14 10:55:59
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0314 10:55:59.285085   11284 out.go:291] Setting OutFile to fd 1 ...
	I0314 10:55:59.285228   11284 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 10:55:59.285232   11284 out.go:304] Setting ErrFile to fd 2...
	I0314 10:55:59.285235   11284 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 10:55:59.285348   11284 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
	I0314 10:55:59.286370   11284 out.go:298] Setting JSON to true
	I0314 10:55:59.302590   11284 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6931,"bootTime":1710432028,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0314 10:55:59.302653   11284 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0314 10:55:59.306095   11284 out.go:97] [download-only-905000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0314 10:55:59.309983   11284 out.go:169] MINIKUBE_LOCATION=18384
	I0314 10:55:59.306228   11284 notify.go:220] Checking for updates...
	I0314 10:55:59.317011   11284 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	I0314 10:55:59.319937   11284 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0314 10:55:59.322976   11284 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 10:55:59.325976   11284 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	W0314 10:55:59.331957   11284 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0314 10:55:59.332103   11284 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 10:55:59.334961   11284 out.go:97] Using the qemu2 driver based on user configuration
	I0314 10:55:59.334985   11284 start.go:297] selected driver: qemu2
	I0314 10:55:59.334989   11284 start.go:901] validating driver "qemu2" against <nil>
	I0314 10:55:59.335021   11284 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0314 10:55:59.336314   11284 out.go:169] Automatically selected the socket_vmnet network
	I0314 10:55:59.340894   11284 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0314 10:55:59.340992   11284 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0314 10:55:59.341033   11284 cni.go:84] Creating CNI manager for ""
	I0314 10:55:59.341041   11284 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0314 10:55:59.341050   11284 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0314 10:55:59.341090   11284 start.go:340] cluster config:
	{Name:download-only-905000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-905000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 10:55:59.345217   11284 iso.go:125] acquiring lock: {Name:mkb282eb7d87bd36e7a71d78648e2dadb7f0a89e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 10:55:59.348001   11284 out.go:97] Starting "download-only-905000" primary control-plane node in "download-only-905000" cluster
	I0314 10:55:59.348009   11284 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0314 10:56:00.011541   11284 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0314 10:56:00.011609   11284 cache.go:56] Caching tarball of preloaded images
	I0314 10:56:00.012264   11284 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0314 10:56:00.017689   11284 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0314 10:56:00.017720   11284 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 ...
	I0314 10:56:00.610768   11284 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4?checksum=md5:6fb922d1d9dc01a9d3c0b965ed219613 -> /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0314 10:56:18.281341   11284 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 ...
	I0314 10:56:18.281480   11284 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 ...
	I0314 10:56:18.864137   11284 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0314 10:56:18.864328   11284 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/download-only-905000/config.json ...
	I0314 10:56:18.864345   11284 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18384-10823/.minikube/profiles/download-only-905000/config.json: {Name:mk45b0170f0164da893e63bdaec0defe48eea120 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 10:56:18.864600   11284 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0314 10:56:18.864727   11284 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/darwin/arm64/v1.28.4/kubectl
	
	
	* The control-plane node download-only-905000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-905000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.08s)

TestDownloadOnly/v1.28.4/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.28.4/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.4/DeleteAll (0.23s)

TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.23s)

=== RUN   TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-905000
--- PASS: TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.23s)

TestDownloadOnly/v1.29.0-rc.2/json-events (20.03s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-045000 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-045000 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=qemu2 : (20.029306834s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (20.03s)

TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
--- PASS: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-045000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-045000: exit status 85 (81.3715ms)

-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-659000 | jenkins | v1.32.0 | 14 Mar 24 10:55 PDT |                     |
	|         | -p download-only-659000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=qemu2                    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 14 Mar 24 10:55 PDT | 14 Mar 24 10:55 PDT |
	| delete  | -p download-only-659000           | download-only-659000 | jenkins | v1.32.0 | 14 Mar 24 10:55 PDT | 14 Mar 24 10:55 PDT |
	| start   | -o=json --download-only           | download-only-905000 | jenkins | v1.32.0 | 14 Mar 24 10:55 PDT |                     |
	|         | -p download-only-905000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=qemu2                    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 14 Mar 24 10:56 PDT | 14 Mar 24 10:56 PDT |
	| delete  | -p download-only-905000           | download-only-905000 | jenkins | v1.32.0 | 14 Mar 24 10:56 PDT | 14 Mar 24 10:56 PDT |
	| start   | -o=json --download-only           | download-only-045000 | jenkins | v1.32.0 | 14 Mar 24 10:56 PDT |                     |
	|         | -p download-only-045000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=qemu2                    |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/14 10:56:27
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0314 10:56:27.044956   11316 out.go:291] Setting OutFile to fd 1 ...
	I0314 10:56:27.045092   11316 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 10:56:27.045095   11316 out.go:304] Setting ErrFile to fd 2...
	I0314 10:56:27.045097   11316 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 10:56:27.045221   11316 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
	I0314 10:56:27.046334   11316 out.go:298] Setting JSON to true
	I0314 10:56:27.062229   11316 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6959,"bootTime":1710432028,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0314 10:56:27.062295   11316 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0314 10:56:27.066959   11316 out.go:97] [download-only-045000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0314 10:56:27.070987   11316 out.go:169] MINIKUBE_LOCATION=18384
	I0314 10:56:27.067045   11316 notify.go:220] Checking for updates...
	I0314 10:56:27.078963   11316 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	I0314 10:56:27.081989   11316 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0314 10:56:27.084953   11316 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 10:56:27.087969   11316 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	W0314 10:56:27.098911   11316 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0314 10:56:27.099051   11316 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 10:56:27.101969   11316 out.go:97] Using the qemu2 driver based on user configuration
	I0314 10:56:27.101977   11316 start.go:297] selected driver: qemu2
	I0314 10:56:27.101980   11316 start.go:901] validating driver "qemu2" against <nil>
	I0314 10:56:27.102017   11316 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0314 10:56:27.104961   11316 out.go:169] Automatically selected the socket_vmnet network
	I0314 10:56:27.109976   11316 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0314 10:56:27.110073   11316 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0314 10:56:27.110115   11316 cni.go:84] Creating CNI manager for ""
	I0314 10:56:27.110124   11316 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0314 10:56:27.110130   11316 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0314 10:56:27.110167   11316 start.go:340] cluster config:
	{Name:download-only-045000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-045000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 10:56:27.114459   11316 iso.go:125] acquiring lock: {Name:mkb282eb7d87bd36e7a71d78648e2dadb7f0a89e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 10:56:27.116997   11316 out.go:97] Starting "download-only-045000" primary control-plane node in "download-only-045000" cluster
	I0314 10:56:27.117007   11316 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0314 10:56:27.767285   11316 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4
	I0314 10:56:27.767364   11316 cache.go:56] Caching tarball of preloaded images
	I0314 10:56:27.768200   11316 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0314 10:56:27.773516   11316 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0314 10:56:27.773547   11316 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4 ...
	I0314 10:56:28.362234   11316 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4?checksum=md5:ec278d0a65e5e64ee0e67f51e14b1867 -> /Users/jenkins/minikube-integration/18384-10823/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-045000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-045000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)
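Note on the download above: the preload URL carries its md5 digest in the checksum query parameter, which is what the getting-checksum step at preload.go:237 validates. A minimal sketch of checking the same tarball by hand, using the URL and digest from the log above (macOS md5 tool assumed):

	curl -fLo preload.tar.lz4 "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4"
	md5 preload.tar.lz4   # expect ec278d0a65e5e64ee0e67f51e14b1867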

TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.23s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.23s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-045000
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.23s)

TestBinaryMirror (0.36s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-003000 --alsologtostderr --binary-mirror http://127.0.0.1:51894 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-003000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-003000
--- PASS: TestBinaryMirror (0.36s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-532000
addons_test.go:928: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-532000: exit status 85 (57.247458ms)

-- stdout --
	* Profile "addons-532000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-532000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)
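The exit status 85 here is the expected outcome: enabling an addon against a profile that does not exist fails fast with the hint shown. A hedged sketch of guarding such a call on profile existence (the grep-based check is illustrative; `profile list -o json` is exercised later in this report):

	if out/minikube-darwin-arm64 profile list -o json | grep -q '"Name":"addons-532000"'; then
	  out/minikube-darwin-arm64 addons enable dashboard -p addons-532000
	fi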

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-532000
addons_test.go:939: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-532000: exit status 85 (59.877875ms)

-- stdout --
	* Profile "addons-532000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-532000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestHyperKitDriverInstallOrUpdate (9.64s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate
=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (9.64s)

TestErrorSpam/start (0.4s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-967000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-967000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-967000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-967000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-967000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-967000 start --dry-run
--- PASS: TestErrorSpam/start (0.40s)

TestErrorSpam/status (0.1s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-967000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-967000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-967000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-967000 status: exit status 7 (33.138041ms)

-- stdout --
	nospam-967000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-967000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-967000 status" failed: exit status 7
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-967000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-967000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-967000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-967000 status: exit status 7 (32.499708ms)

-- stdout --
	nospam-967000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-967000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-967000 status" failed: exit status 7
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-967000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-967000 status
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-967000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-967000 status: exit status 7 (32.081584ms)

-- stdout --
	nospam-967000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-967000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-967000 status" failed: exit status 7
--- PASS: TestErrorSpam/status (0.10s)
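Each status call above exits 7 because the nospam-967000 host is stopped; `minikube status` reports state through its exit code, so a script can branch on it. A minimal sketch:

	# start the profile only if `status` reports it is not running (non-zero exit)
	if ! out/minikube-darwin-arm64 -p nospam-967000 status >/dev/null 2>&1; then
	  out/minikube-darwin-arm64 start -p nospam-967000
	fi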

TestErrorSpam/pause (0.13s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-967000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-967000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-967000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-967000 pause: exit status 83 (42.369041ms)

-- stdout --
	* The control-plane node nospam-967000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-967000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-967000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-967000 pause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-967000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-967000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-967000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-967000 pause: exit status 83 (41.880583ms)

-- stdout --
	* The control-plane node nospam-967000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-967000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-967000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-967000 pause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-967000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-967000 pause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-967000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-967000 pause: exit status 83 (41.812041ms)

-- stdout --
	* The control-plane node nospam-967000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-967000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-967000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-967000 pause" failed: exit status 83
--- PASS: TestErrorSpam/pause (0.13s)

TestErrorSpam/unpause (0.13s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-967000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-967000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-967000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-967000 unpause: exit status 83 (42.739333ms)

-- stdout --
	* The control-plane node nospam-967000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-967000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-967000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-967000 unpause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-967000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-967000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-967000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-967000 unpause: exit status 83 (40.021542ms)

-- stdout --
	* The control-plane node nospam-967000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-967000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-967000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-967000 unpause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-967000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-967000 unpause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-967000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-967000 unpause: exit status 83 (42.728334ms)

-- stdout --
	* The control-plane node nospam-967000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-967000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-967000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-967000 unpause" failed: exit status 83
--- PASS: TestErrorSpam/unpause (0.13s)

TestErrorSpam/stop (8.02s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-967000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-967000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-967000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-967000 stop: (2.845443417s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-967000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-967000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-967000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-967000 stop: (3.145603583s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-967000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-967000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-967000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-967000 stop: (2.023403917s)
--- PASS: TestErrorSpam/stop (8.02s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/18384-10823/.minikube/files/etc/test/nested/copy/11238/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/CacheCmd/cache/add_remote (6.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-780000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-780000 cache add registry.k8s.io/pause:3.1: (2.087244709s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-780000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-780000 cache add registry.k8s.io/pause:3.3: (2.155393209s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-780000 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-780000 cache add registry.k8s.io/pause:latest: (1.797646917s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (6.04s)
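`cache add` pulls each image and stores it in the minikube home's cache directory so later loads do not hit the registry. A sketch pairing it with the `cache list` and `cache delete` subcommands exercised below (image tag taken from the run above):

	out/minikube-darwin-arm64 -p functional-780000 cache add registry.k8s.io/pause:3.1
	out/minikube-darwin-arm64 cache list      # the pause:3.1 entry should now appear
	out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1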

TestFunctional/serial/CacheCmd/cache/add_local (1.16s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-780000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialCacheCmdcacheadd_local1907695427/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-arm64 -p functional-780000 cache add minikube-local-cache-test:functional-780000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-arm64 -p functional-780000 cache delete minikube-local-cache-test:functional-780000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-780000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.16s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/parallel/ConfigCmd (0.24s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-780000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-780000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-780000 config get cpus: exit status 14 (33.135ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-780000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-780000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-780000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-780000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-780000 config get cpus: exit status 14 (36.367541ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.24s)
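Both non-zero exits above show that `config get` returns status 14 when a key is unset, so a caller can distinguish "unset" from other failures. A minimal shell sketch of falling back to a default:

	cpus="$(out/minikube-darwin-arm64 -p functional-780000 config get cpus 2>/dev/null)" || cpus=2
	echo "using ${cpus} CPUs"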

TestFunctional/parallel/DryRun (0.28s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-780000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-780000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (162.445209ms)

-- stdout --
	* [functional-780000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18384
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0314 10:58:41.202929   11952 out.go:291] Setting OutFile to fd 1 ...
	I0314 10:58:41.203089   11952 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 10:58:41.203094   11952 out.go:304] Setting ErrFile to fd 2...
	I0314 10:58:41.203097   11952 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 10:58:41.203249   11952 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
	I0314 10:58:41.204413   11952 out.go:298] Setting JSON to false
	I0314 10:58:41.223894   11952 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7093,"bootTime":1710432028,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0314 10:58:41.223962   11952 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0314 10:58:41.229501   11952 out.go:177] * [functional-780000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0314 10:58:41.236380   11952 out.go:177]   - MINIKUBE_LOCATION=18384
	I0314 10:58:41.240393   11952 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	I0314 10:58:41.236470   11952 notify.go:220] Checking for updates...
	I0314 10:58:41.247339   11952 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0314 10:58:41.250353   11952 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 10:58:41.253327   11952 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	I0314 10:58:41.256361   11952 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 10:58:41.259650   11952 config.go:182] Loaded profile config "functional-780000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 10:58:41.259934   11952 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 10:58:41.264301   11952 out.go:177] * Using the qemu2 driver based on existing profile
	I0314 10:58:41.271412   11952 start.go:297] selected driver: qemu2
	I0314 10:58:41.271419   11952 start.go:901] validating driver "qemu2" against &{Name:functional-780000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-780000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 10:58:41.271484   11952 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 10:58:41.278368   11952 out.go:177] 
	W0314 10:58:41.281325   11952 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0314 10:58:41.284285   11952 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-780000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.28s)
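The dry run validates flags against the existing profile without touching a VM: 250MB trips RSRC_INSUFFICIENT_REQ_MEMORY (exit 23) because it is below the stated usable minimum of 1800MB. Per that message, the smallest allocation the validator should accept is presumably:

	out/minikube-darwin-arm64 start -p functional-780000 --dry-run --memory 1800MB --alsologtostderr --driver=qemu2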

TestFunctional/parallel/InternationalLanguage (0.11s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-780000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-780000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (112.822125ms)

-- stdout --
	* [functional-780000] minikube v1.32.0 sur Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18384
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0314 10:58:41.438890   11963 out.go:291] Setting OutFile to fd 1 ...
	I0314 10:58:41.438996   11963 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 10:58:41.438999   11963 out.go:304] Setting ErrFile to fd 2...
	I0314 10:58:41.439001   11963 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 10:58:41.439129   11963 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18384-10823/.minikube/bin
	I0314 10:58:41.440436   11963 out.go:298] Setting JSON to false
	I0314 10:58:41.456995   11963 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7093,"bootTime":1710432028,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0314 10:58:41.457067   11963 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0314 10:58:41.462414   11963 out.go:177] * [functional-780000] minikube v1.32.0 sur Darwin 14.3.1 (arm64)
	I0314 10:58:41.468304   11963 out.go:177]   - MINIKUBE_LOCATION=18384
	I0314 10:58:41.472357   11963 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	I0314 10:58:41.468396   11963 notify.go:220] Checking for updates...
	I0314 10:58:41.479320   11963 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0314 10:58:41.482363   11963 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 10:58:41.485397   11963 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	I0314 10:58:41.488331   11963 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 10:58:41.491628   11963 config.go:182] Loaded profile config "functional-780000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0314 10:58:41.491890   11963 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 10:58:41.496358   11963 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0314 10:58:41.503276   11963 start.go:297] selected driver: qemu2
	I0314 10:58:41.503282   11963 start.go:901] validating driver "qemu2" against &{Name:functional-780000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-780000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 10:58:41.503332   11963 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 10:58:41.509389   11963 out.go:177] 
	W0314 10:58:41.513321   11963 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0314 10:58:41.517335   11963 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)
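The French stdout above is the localized form of the same failure DryRun hit: exiting due to RSRC_INSUFFICIENT_REQ_MEMORY because the requested 250MiB allocation is below the usable minimum of 1800MB. minikube picks the translation from the process locale, so the run was presumably driven by an environment along these lines (the exact locale value is an assumption, not shown in the log):

	LC_ALL=fr out/minikube-darwin-arm64 start -p functional-780000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2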

TestFunctional/parallel/AddonsCmd (0.12s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-arm64 -p functional-780000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-arm64 -p functional-780000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

TestFunctional/parallel/License (1.41s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-arm64 license
functional_test.go:2284: (dbg) Done: out/minikube-darwin-arm64 license: (1.412379916s)
--- PASS: TestFunctional/parallel/License (1.41s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-arm64 -p functional-780000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/ImageCommands/Setup (5.38s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (5.337192625s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-780000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (5.38s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-780000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-arm64 -p functional-780000 image rm gcr.io/google-containers/addon-resizer:functional-780000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-780000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.08s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.13s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-780000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-arm64 -p functional-780000 image save --daemon gcr.io/google-containers/addon-resizer:functional-780000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-780000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.13s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.14s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.14s)

TestFunctional/parallel/ProfileCmd/profile_list (0.11s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1311: Took "73.824042ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1325: Took "35.734167ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.11s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.11s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1362: Took "71.12275ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1375: Took "36.792417ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.11s)
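`profile list -o json` emits a machine-readable envelope with `valid` and `invalid` arrays, and `--light` skips live status probing, consistent with it returning in roughly half the time above. A sketch of extracting profile names (jq assumed to be installed):

	out/minikube-darwin-arm64 profile list -o json | jq -r '.valid[].Name'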

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:351: (dbg) Done: dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.: (10.012553959s)
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-780000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)

TestFunctional/delete_addon-resizer_images (0.17s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-780000
--- PASS: TestFunctional/delete_addon-resizer_images (0.17s)

TestFunctional/delete_my-image_image (0.04s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-780000
--- PASS: TestFunctional/delete_my-image_image (0.04s)

TestFunctional/delete_minikube_cached_images (0.04s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-780000
--- PASS: TestFunctional/delete_minikube_cached_images (0.04s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (3.34s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-620000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-620000 --output=json --user=testUser: (3.33588125s)
--- PASS: TestJSONOutput/stop/Command (3.34s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.33s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-822000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-822000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (94.906042ms)

-- stdout --
	{"specversion":"1.0","id":"e0ab336f-a1c1-4185-b579-0ca8d4b64520","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-822000] minikube v1.32.0 on Darwin 14.3.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"51167a60-3d7d-40be-b140-3332fab9e32d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18384"}}
	{"specversion":"1.0","id":"3bcf9756-2250-47f3-9472-2e219cb235ba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig"}}
	{"specversion":"1.0","id":"dce18bfe-9e46-4413-a5ed-087a6ee0f124","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"bfb37ce5-d562-4b7a-a90f-98876c988cd2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"5604e9a8-dc70-4128-9b9b-7a6c754e2a45","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube"}}
	{"specversion":"1.0","id":"481489f9-641d-4102-9837-eae8ada8181f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"23497bdb-02c1-4fec-b3d0-9dcc693ff121","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-822000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-822000
--- PASS: TestErrorJSONOutput (0.33s)
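
Each line in the stdout above is a CloudEvents-style JSON envelope emitted by --output=json. As a minimal illustrative sketch (not part of the test suite; the struct models only the fields visible in this log), such a stream can be decoded in Go like so:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// cloudEvent captures the envelope fields seen in the log above.
type cloudEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev cloudEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip any non-JSON lines
		}
		// Events of type io.k8s.sigs.minikube.error carry the exit code,
		// e.g. "56" for DRV_UNSUPPORTED_OS above.
		fmt.Printf("%s: %s\n", ev.Type, ev.Data["message"])
	}
}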

TestMainNoArgs (0.04s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.04s)

TestStoppedBinaryUpgrade/Setup (5.54s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (5.54s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-893000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-893000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (100.736ms)

-- stdout --
	* [NoKubernetes-893000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18384
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18384-10823/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18384-10823/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
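
The test above asserts that minikube rejects --kubernetes-version combined with --no-kubernetes, exiting with status 14 (the MK_USAGE class shown in stderr). A hedged sketch of how such an exit code can be checked from Go (the binary path and flags are copied from the log; the program itself is illustrative, not from the suite):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "start", "-p", "NoKubernetes-893000",
		"--no-kubernetes", "--kubernetes-version=1.20", "--driver=qemu2")
	out, err := cmd.CombinedOutput()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 14 {
		// Usage error, as expected: the two flags are mutually exclusive.
		fmt.Printf("got expected usage error:\n%s", out)
	}
}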

TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-893000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-893000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (43.235875ms)

-- stdout --
	* The control-plane node NoKubernetes-893000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-893000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)
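
The check above leans on systemctl semantics: `systemctl is-active --quiet <unit>` exits 0 only when the unit is active, so any non-zero exit means kubelet is not running. A minimal sketch of that convention (illustrative only; assumes a host with systemd):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Exit code 0 => active; anything else => inactive, failed, or unknown unit.
	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
	fmt.Println("kubelet active:", err == nil)
}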

TestNoKubernetes/serial/ProfileList (31.43s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.711157041s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.721402333s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.43s)

TestNoKubernetes/serial/Stop (3.94s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-893000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-893000: (3.94323025s)
--- PASS: TestNoKubernetes/serial/Stop (3.94s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.05s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-893000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-893000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (46.901583ms)

-- stdout --
	* The control-plane node NoKubernetes-893000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-893000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.05s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.7s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-157000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.70s)

TestStartStop/group/old-k8s-version/serial/Stop (2.01s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-885000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-885000 --alsologtostderr -v=3: (2.010136125s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (2.01s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.13s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-885000 -n old-k8s-version-885000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-885000 -n old-k8s-version-885000: exit status 7 (58.182584ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-885000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.13s)
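
The --format={{.Host}} flag used above is Go text/template syntax rendered against the profile's status, which is why a stopped profile prints just "Stopped". A small sketch of the mechanics (the Status struct here is a hypothetical stand-in, not minikube's actual type):

package main

import (
	"os"
	"text/template"
)

// Status is an illustrative stand-in for the struct the template renders.
type Status struct {
	Host, Kubelet, APIServer string
}

func main() {
	tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
	// Prints "Stopped", matching the stdout captured above.
	if err := tmpl.Execute(os.Stdout, Status{Host: "Stopped", Kubelet: "Stopped", APIServer: "Stopped"}); err != nil {
		panic(err)
	}
}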

TestStartStop/group/no-preload/serial/Stop (3.68s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-861000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-861000 --alsologtostderr -v=3: (3.684009167s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (3.68s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.14s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-861000 -n no-preload-861000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-861000 -n no-preload-861000: exit status 7 (62.553791ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-861000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.14s)

TestStartStop/group/embed-certs/serial/Stop (1.98s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-178000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-178000 --alsologtostderr -v=3: (1.976853625s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (1.98s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.25s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-178000 -n embed-certs-178000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-178000 -n embed-certs-178000: exit status 7 (43.163792ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-178000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.25s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (3.72s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-610000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-610000 --alsologtostderr -v=3: (3.715831708s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (3.72s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-610000 -n default-k8s-diff-port-610000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-610000 -n default-k8s-diff-port-610000: exit status 7 (56.702625ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-610000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-725000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (2.99s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-725000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-725000 --alsologtostderr -v=3: (2.988479458s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.99s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-725000 -n newest-cni-725000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-725000 -n newest-cni-725000: exit status 7 (56.739291ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-725000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (24/266)

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.28.4/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

TestDownloadOnly/v1.28.4/binaries (0s)
=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)
=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

TestDownloadOnly/v1.29.0-rc.2/binaries (0s)
=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/MountCmd/any-port (14.73s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-780000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3460547925/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1710439083877648000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3460547925/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1710439083877648000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3460547925/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1710439083877648000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3460547925/001/test-1710439083877648000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-780000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-780000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (57.265708ms)

-- stdout --
	* The control-plane node functional-780000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-780000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-780000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-780000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.154292ms)

-- stdout --
	* The control-plane node functional-780000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-780000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-780000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-780000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (88.102334ms)

-- stdout --
	* The control-plane node functional-780000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-780000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-780000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-780000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.796125ms)

-- stdout --
	* The control-plane node functional-780000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-780000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-780000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-780000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (90.362084ms)

-- stdout --
	* The control-plane node functional-780000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-780000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-780000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-780000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.007625ms)

-- stdout --
	* The control-plane node functional-780000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-780000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-780000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-780000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (89.36325ms)

-- stdout --
	* The control-plane node functional-780000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-780000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-780000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-780000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.745208ms)

-- stdout --
	* The control-plane node functional-780000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-780000"

-- /stdout --
functional_test_mount_test.go:123: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-780000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-780000 ssh "sudo umount -f /mount-9p": exit status 83 (49.567ms)

-- stdout --
	* The control-plane node functional-780000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-780000"

-- /stdout --
functional_test_mount_test.go:92: "out/minikube-darwin-arm64 -p functional-780000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-780000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3460547925/001:/mount-9p --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/any-port (14.73s)
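
The repeated findmnt probes above follow a simple poll-until-timeout pattern before the test gives up and skips. A hedged sketch of that pattern (the retry budget and sleep interval are illustrative, not the suite's actual values):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	for attempt := 1; attempt <= 8; attempt++ {
		// Succeeds only once the 9p mount is visible inside the guest.
		err := exec.Command("out/minikube-darwin-arm64", "-p", "functional-780000",
			"ssh", "findmnt -T /mount-9p | grep 9p").Run()
		if err == nil {
			fmt.Println("mount appeared")
			return
		}
		time.Sleep(time.Second)
	}
	fmt.Println("mount did not appear; skipping")
}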

TestFunctional/parallel/MountCmd/specific-port (10.99s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-780000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port2163104104/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-780000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-780000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (62.091958ms)

-- stdout --
	* The control-plane node functional-780000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-780000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-780000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-780000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.647167ms)

-- stdout --
	* The control-plane node functional-780000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-780000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-780000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-780000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (89.143625ms)

-- stdout --
	* The control-plane node functional-780000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-780000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-780000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-780000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.337417ms)

-- stdout --
	* The control-plane node functional-780000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-780000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-780000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-780000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (91.945083ms)

-- stdout --
	* The control-plane node functional-780000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-780000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-780000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-780000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.525708ms)

-- stdout --
	* The control-plane node functional-780000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-780000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-780000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-780000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.917166ms)

-- stdout --
	* The control-plane node functional-780000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-780000"

-- /stdout --
functional_test_mount_test.go:251: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-780000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-780000 ssh "sudo umount -f /mount-9p": exit status 83 (49.721916ms)

-- stdout --
	* The control-plane node functional-780000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-780000"

-- /stdout --
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-780000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-780000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port2163104104/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- SKIP: TestFunctional/parallel/MountCmd/specific-port (10.99s)

TestFunctional/parallel/MountCmd/VerifyCleanup (11.54s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-780000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3632899411/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-780000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3632899411/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-780000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3632899411/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-780000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-780000 ssh "findmnt -T" /mount1: exit status 83 (85.950167ms)

-- stdout --
	* The control-plane node functional-780000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-780000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-780000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-780000 ssh "findmnt -T" /mount1: exit status 83 (87.72575ms)

-- stdout --
	* The control-plane node functional-780000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-780000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-780000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-780000 ssh "findmnt -T" /mount1: exit status 83 (86.93575ms)

-- stdout --
	* The control-plane node functional-780000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-780000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-780000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-780000 ssh "findmnt -T" /mount1: exit status 83 (87.718333ms)

-- stdout --
	* The control-plane node functional-780000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-780000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-780000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-780000 ssh "findmnt -T" /mount1: exit status 83 (84.7295ms)

-- stdout --
	* The control-plane node functional-780000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-780000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-780000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-780000 ssh "findmnt -T" /mount1: exit status 83 (88.102917ms)

-- stdout --
	* The control-plane node functional-780000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-780000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-780000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-780000 ssh "findmnt -T" /mount1: exit status 83 (87.49025ms)

-- stdout --
	* The control-plane node functional-780000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-780000"

-- /stdout --
functional_test_mount_test.go:340: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-780000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3632899411/001:/mount1 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-780000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3632899411/001:/mount2 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-780000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3632899411/001:/mount3 --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/VerifyCleanup (11.54s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.55s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-912000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-912000

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-912000

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-912000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-912000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-912000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-912000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-912000

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-912000

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-912000

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-912000

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-912000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912000"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-912000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912000"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-912000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912000"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-912000

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-912000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912000"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-912000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912000"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-912000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-912000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-912000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-912000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-912000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-912000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-912000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-912000" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-912000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912000"

>>> host: ip a s:
* Profile "cilium-912000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912000"

>>> host: ip r s:
* Profile "cilium-912000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912000"

>>> host: iptables-save:
* Profile "cilium-912000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912000"

>>> host: iptables table nat:
* Profile "cilium-912000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-912000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-912000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-912000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-912000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-912000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-912000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-912000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-912000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-912000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-912000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-912000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-912000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912000"

>>> host: kubelet daemon config:
* Profile "cilium-912000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912000"

>>> k8s: kubelet logs:
* Profile "cilium-912000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-912000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-912000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-912000

>>> host: docker daemon status:
* Profile "cilium-912000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912000"

>>> host: docker daemon config:
* Profile "cilium-912000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-912000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912000"

>>> host: docker system info:
* Profile "cilium-912000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912000"

>>> host: cri-docker daemon status:
* Profile "cilium-912000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912000"

>>> host: cri-docker daemon config:
* Profile "cilium-912000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-912000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-912000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912000"

>>> host: cri-dockerd version:
* Profile "cilium-912000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912000"

>>> host: containerd daemon status:
* Profile "cilium-912000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912000"

>>> host: containerd daemon config:
* Profile "cilium-912000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-912000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-912000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912000"

>>> host: containerd config dump:
* Profile "cilium-912000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912000"

>>> host: crio daemon status:
* Profile "cilium-912000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912000"

>>> host: crio daemon config:
* Profile "cilium-912000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912000"

>>> host: /etc/crio:
* Profile "cilium-912000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912000"

>>> host: crio config:
* Profile "cilium-912000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-912000"

----------------------- debugLogs end: cilium-912000 [took: 2.302457375s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-912000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-912000
--- SKIP: TestNetworkPlugins/group/cilium (2.55s)

TestStartStop/group/disable-driver-mounts (0.23s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-923000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-923000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.23s)